858 results for Interval Arithmetic Operations
Abstract:
This paper presents a single-precision floating-point arithmetic unit with support for multiplication, addition, fused multiply-add, reciprocal, square-root and inverse square-root, with high performance and low resource usage. The design uses a piecewise second-order polynomial approximation to implement reciprocal, square-root and inverse square-root. The unit can be configured with any number of operations and is capable of calculating any function with a throughput of one operation per cycle. The floating-point multiplier of the unit is also used to implement the polynomial approximation and the fused multiply-add operation. We have compared our implementation with other state-of-the-art proposals, including the Xilinx Core-Gen operators, and conclude that the approach has high relative performance/area efficiency. © 2014 Technical University of Munich (TUM).
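The abstract names the technique but gives no implementation detail, so the following is a minimal Python sketch of a piecewise second-order polynomial approximation of the reciprocal on the mantissa range [1, 2); the segment count, the least-squares fitting step and all identifiers are illustrative assumptions, not the authors' design.

```python
import numpy as np

SEGMENTS = 64  # illustrative table size; the paper's actual configuration is not given

# Offline step: least-squares quadratic fit of 1/x on each segment of [1, 2).
edges = np.linspace(1.0, 2.0, SEGMENTS + 1)
coeffs = []
for lo, hi in zip(edges[:-1], edges[1:]):
    xs = np.linspace(lo, hi, 32)
    coeffs.append(np.polyfit(xs, 1.0 / xs, 2))   # returns [c2, c1, c0]

def recip_approx(x: float) -> float:
    """Approximate 1/x for x in [1, 2) with the per-segment quadratic."""
    idx = min(int((x - 1.0) * SEGMENTS), SEGMENTS - 1)   # segment lookup by leading bits
    c2, c1, c0 = coeffs[idx]
    return (c2 * x + c1) * x + c0   # Horner form: two multiply-adds

xs = np.linspace(1.0, 2.0, 1000, endpoint=False)
print(max(abs(recip_approx(x) * x - 1.0) for x in xs))   # max relative error
```

Evaluating in Horner form, (c2*x + c1)*x + c0, is what allows a single multiplier with fused multiply-add, as described in the abstract, to serve double duty for the polynomial evaluation.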
Abstract:
IEEE 754 floating-point arithmetic is widely used in modern, general-purpose computers. It is based on real arithmetic and is made total by adding a positive and a negative infinity, a negative zero, and many Not-a-Number (NaN) states. Transreal arithmetic is total. It also has a positive and a negative infinity but no negative zero, and it has a single, unordered number, nullity. Modifying the IEEE arithmetic so that it uses transreal arithmetic has a number of advantages. It removes one redundant binade from IEEE floating-point objects, doubling the numerical precision of the arithmetic. It removes eight redundant relational floating-point operations and removes the redundant total-order operation. It replaces the non-reflexive floating-point equality operator with a reflexive equality operator, and it indicates that some of the exceptions may be removed as redundant, subject to issues of backward compatibility and transient future compatibility as programmers migrate to the transreal paradigm.
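As a rough illustration of the semantics described above, here is a hedged Python sketch of transreal division and reflexive equality; representing nullity as a singleton Python object is our assumption, not a detail taken from the paper.

```python
import math

class Nullity:
    """Transreal nullity (Φ): a single, unordered element that equals itself."""
    def __eq__(self, other):          # reflexive, unlike IEEE NaN
        return isinstance(other, Nullity)
    def __lt__(self, other):          # unordered with respect to every number
        return False
    def __gt__(self, other):
        return False
    def __repr__(self):
        return "Φ"

PHI = Nullity()

def t_div(a, b):
    """Transreal division: total, so no exception is ever raised."""
    if b != 0:
        return a / b
    if a > 0:
        return math.inf               # a/0 = +infinity for a > 0
    if a < 0:
        return -math.inf              # a/0 = -infinity for a < 0
    return PHI                        # 0/0 = Φ (nullity)

print(t_div(1, 0), t_div(-1, 0), t_div(0, 0))   # inf -inf Φ
print(t_div(0, 0) == t_div(0, 0))               # True: equality is reflexive
print(float("nan") == float("nan"))             # False under IEEE 754
```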
Abstract:
This work presents JFloat, a software implementation of the IEEE-754 standard for binary floating-point arithmetic. JFloat was built to provide some features not implemented in Java, specifically support for directed rounding. That feature is important for Java-XSC, a project developed in this department. Moreover, Java programs should have the same portability when using floating-point operations, mainly because IEEE-754 specifies that programs should behave exactly the same on every configuration. However, it was noted that programs using Java's native floating-point types may be machine- and operating-system-dependent. JFloat is a possible solution to that problem.
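JFloat's actual API is not shown in the abstract, so as a language-neutral illustration of what directed rounding provides (the feature Java-XSC-style interval computations rely on), here is a Python sketch using the standard decimal module's rounding modes; none of these identifiers belong to JFloat.

```python
from decimal import Decimal, getcontext, ROUND_FLOOR, ROUND_CEILING

getcontext().prec = 7   # roughly single-precision-like significand length

def div_down(a: str, b: str) -> Decimal:
    """Quotient rounded toward -infinity."""
    getcontext().rounding = ROUND_FLOOR
    return Decimal(a) / Decimal(b)

def div_up(a: str, b: str) -> Decimal:
    """Quotient rounded toward +infinity."""
    getcontext().rounding = ROUND_CEILING
    return Decimal(a) / Decimal(b)

# A two-sided enclosure of 1/3: the exact value is guaranteed to lie between.
lo, hi = div_down("1", "3"), div_up("1", "3")
print(lo, hi)   # 0.3333333 0.3333334
```

With only round-to-nearest, as in Java's primitive float and double, such guaranteed enclosures cannot be built, which is the gap JFloat aims to fill.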
Abstract:
In this paper we continue Feferman's unfolding program initiated in (Feferman, vol. 6 of Lecture Notes in Logic, 1996), which uses the concept of the unfolding U(S) of a schematic system S in order to describe those operations, predicates, and principles concerning them that are implicit in the acceptance of S. The program has been carried through for a schematic system of non-finitist arithmetic NFA in Feferman and Strahm (Ann Pure Appl Log, 104(1–3):75–96, 2000) and for a system FA (with and without the Bar rule) in Feferman and Strahm (Rev Symb Log, 3(4):665–689, 2010). The present contribution elucidates the concept of unfolding for a basic schematic system FEA of feasible arithmetic. Apart from the operational unfolding U0(FEA) of FEA, we study two full unfolding notions, namely the predicate unfolding U(FEA) and a more general truth unfolding UT(FEA) of FEA, the latter making use of a truth predicate added to the language of the operational unfolding. The main results obtained are that the provably convergent functions on binary words, for all three unfolding systems, are precisely those computable in polynomial time. The upper-bound computations make essential use of a specific theory of truth TPT over combinatory logic, which was introduced in Eberhard and Strahm (Bull Symb Log, 18(3):474–475, 2012) and Eberhard (A feasible theory of truth over combinatory logic, 2014) and whose involved proof-theoretic analysis is due to Eberhard (A feasible theory of truth over combinatory logic, 2014). The results of this paper were first announced in Eberhard and Strahm (Bull Symb Log, 18(3):474–475, 2012).
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
In Electronic Support, it is well known that periodic search strategies for swept-frequency superheterodyne receivers (SHRs) can cause synchronisation with the radar they seek to detect. Synchronisation occurs when the periods governing the search strategies of the SHR and the radar are commensurate. The result may be that the radar is never detected. This paper considers the synchronisation problem in depth. We find that there is usually a finite number of synchronisation ratios between the radar's scan period and the SHR's sweep period. We develop three geometric constructions by which these ratios can be found, and we relate them to the Farey series. The ratios may be used to determine the intercept time for any combination of scan and sweep period. This theory can assist the operator of an SHR in selecting a sweep period that minimises the intercept time against a number of radars in a threat emitter list.
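The paper's three constructions are geometric, but the role of the Farey series can be sketched numerically. A minimal Python sketch, assuming we only want the low-order rationals nearest a given scan-to-sweep period ratio (the periods below are made-up examples, not values from the paper):

```python
from fractions import Fraction

def farey(n):
    """Yield the Farey sequence F_n in ascending order (standard recurrence)."""
    a, b, c, d = 0, 1, 1, n
    yield Fraction(a, b)
    while c <= n:
        k = (n + b) // d
        a, b, c, d = c, d, k * c - a, k * d - b
        yield Fraction(a, b)

# Hypothetical periods: radar scan 2.7 s, SHR sweep 1.0 s (illustrative only).
scan, sweep = 2.7, 1.0
frac = (scan / sweep) % 1.0        # fractional part of the period ratio
nearest = sorted(farey(10), key=lambda f: abs(float(f) - frac))[:3]
print(nearest)                     # low-order ratios most nearly commensurate
```

Ratios p/q with small denominators that land near the actual period ratio are exactly the commensurate cases in which the receiver's sweep can stay synchronised with the radar's scan and never intercept it.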
Abstract:
This paper was presented at the 12th International Conference on Applications of Computer Algebra, Varna, Bulgaria, June 2006.
Abstract:
There exist uniquely ergodic affine interval exchange transformations of [0,1] with flips which have wandering intervals and are such that the support of the invariant measure is a Cantor set.
Abstract:
In this study we used fluorescence spectroscopy to determine the post-mortem interval. Conventional methods in forensic medicine involve tissue or body-fluid sampling and laboratory tests, which are often time-consuming and may depend on expensive analyses. The presented method consists of using time-dependent variations in the fluorescence spectrum and their correlation with the time elapsed after the cessation of regular metabolic activity. This new approach addresses unmet needs for post-mortem interval determination in forensic medicine by providing rapid, in situ measurements that show improved time resolution relative to existing methods. (C) 2009 Optical Society of America
Diagnostic errors and repetitive sequential classifications in on-line process control by attributes
Abstract:
The procedure of on-line process control by attributes, known as Taguchi's on-line process control, consists of inspecting the m-th item (a single item) at every m produced items and deciding, at each inspection, whether the fraction of conforming items has been reduced or not. If the inspected item is non-conforming, production is stopped for adjustment. As the inspection system can be subject to diagnosis errors, we develop a probabilistic model that classifies the examined item repeatedly until either a conforming or b non-conforming classifications are observed; whichever event occurs first determines the final classification of the examined item. Properties of an ergodic Markov chain are used to obtain an expression for the average cost of the control system, which can be optimized over three parameters: the sampling interval of the inspections (m), the number of repeated conforming classifications (a), and the number of repeated non-conforming classifications (b). The optimum design is compared with two alternative approaches. The first is a simple preventive policy: the production system is adjusted at every n produced items and no inspection is performed. The second classifies the examined item a fixed number of times, r, and considers it conforming if the majority of the classification results are conforming. Results indicate that the current proposal performs better than the procedure that fixes the number of repeated classifications and uses a majority rule. On the other hand, depending on the degree of errors and the costs, the preventive policy can, on average, be more economical than the alternatives that require inspection. A numerical example illustrates the proposed procedure. (C) 2009 Elsevier B. V. All rights reserved.
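The average-cost expression requires the paper's Markov-chain model, but the final-classification rule itself has a closed form: if each repeated look independently declares the item conforming with probability q, the chance that a conforming classifications arrive before b non-conforming ones is a negative-binomial tail sum. A minimal sketch, with q, a and b as illustrative parameters:

```python
from math import comb

def p_final_conforming(q: float, a: int, b: int) -> float:
    """P(a 'conforming' classifications occur before b 'non-conforming' ones),
    where each repeated classification is independently conforming w.p. q."""
    # k = number of non-conforming calls seen before the a-th conforming call.
    return sum(comb(a - 1 + k, k) * q**a * (1 - q)**k for k in range(b))

# A truly conforming item examined with 90% per-classification accuracy:
print(p_final_conforming(0.9, a=2, b=2))   # 0.972: final verdict is 'conforming'
```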
Abstract:
The procedure for on-line process control by attributes consists of inspecting a single item at every m produced items. On the basis of the inspection result, it is decided whether the process is in control (the conforming fraction is stable) or out of control (the conforming fraction has decreased, for example). Most articles about on-line process control stop the production process for adjustment when the inspected item is non-conforming (production is then restarted in control; here this is called a corrective adjustment). Moreover, the articles on this subject do not present semi-economical designs (which may yield large quantities of non-conforming items), as they do not include a policy of preventive adjustments (in which case no item is inspected), which can be more economical, mainly if the inspected item can be misclassified. In this article, the choice between a preventive and a corrective adjustment of the process is made at every m produced items. If a preventive adjustment is decided upon, no item is inspected. Otherwise, the m-th item is inspected: if it conforms, production goes on; if not, an adjustment takes place and the process restarts in control. This approach is economically feasible for some practical situations, and the parameters of the proposed procedure are determined by minimizing an average cost function subject to statistical restrictions (for example, to assure a minimal level, fixed in advance, of conforming items in the production process). Numerical examples illustrate the proposal.
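The paper's design minimizes an average cost function that the abstract does not reproduce, but the corrective-versus-preventive trade-off it describes can be illustrated with a toy Monte Carlo comparison; every rate and cost below is an assumption chosen only for illustration, and inspection is taken to be error-free for brevity.

```python
import random

def avg_cost(m, preventive_every=None, n_items=200_000,
             p_in=0.99, p_out=0.90, shift=1e-4,
             c_inspect=1.0, c_adjust=50.0, c_defect=20.0):
    """Toy model: inspect every m-th item and adjust correctively on a
    non-conforming result; optionally force a preventive adjustment every
    `preventive_every` items instead (no inspection on those occasions)."""
    random.seed(0)
    in_control, cost = True, 0.0
    for i in range(1, n_items + 1):
        if in_control and random.random() < shift:
            in_control = False                    # process drifts out of control
        conforming = random.random() < (p_in if in_control else p_out)
        if not conforming:
            cost += c_defect
        if preventive_every and i % preventive_every == 0:
            cost += c_adjust; in_control = True   # preventive adjustment
        elif i % m == 0:
            cost += c_inspect
            if not conforming:                    # corrective adjustment
                cost += c_adjust; in_control = True
    return cost / n_items

print("corrective only :", avg_cost(m=50))
print("with preventive :", avg_cost(m=50, preventive_every=5_000))
```

Which policy wins depends entirely on the assumed drift rate, error structure and costs, which is the paper's point about when preventive adjustment is the more economical choice.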
Abstract:
While the physiological adaptations that occur following endurance training in previously sedentary and recreationally active individuals are relatively well understood, the adaptations to training in already highly trained endurance athletes remain unclear. While significant improvements in endurance performance and corresponding physiological markers are evident following submaximal endurance training in sedentary and recreationally active groups, an additional increase in submaximal training (i.e. volume) in highly trained individuals does not appear to further enhance either endurance performance or associated physiological variables [e.g. peak oxygen uptake (V̇O2peak), oxidative enzyme activity]. It seems that, for athletes who are already trained, improvements in endurance performance can be achieved only through high-intensity interval training (HIT). The limited research that has examined changes in muscle enzyme activity in highly trained athletes following HIT has revealed no change in oxidative or glycolytic enzyme activity, despite significant improvements in endurance performance (p < 0.05). Instead, an increase in skeletal muscle buffering capacity may be one mechanism responsible for an improvement in endurance performance. Changes in plasma volume, stroke volume, muscle cation pumps, myoglobin, capillary density and fibre-type characteristics have yet to be investigated in response to HIT in highly trained athletes. Information relating to HIT programme optimisation in endurance athletes is also very sparse. Preliminary work using the velocity at which V̇O2max is achieved (Vmax) as the interval intensity, and fractions (50 to 75%) of the time to exhaustion at Vmax (Tmax) as the interval duration, has been successful in eliciting improvements in performance in long-distance runners. However, Vmax and Tmax have not been used with cyclists. Instead, HIT programme-optimisation research in cyclists has revealed that repeated supramaximal sprinting may be as effective as more traditional HIT programmes for eliciting improvements in endurance performance. Further examination of the biochemical and physiological adaptations that accompany different HIT programmes, as well as investigation into the optimal HIT programme for eliciting performance enhancements in highly trained athletes, is required.
Abstract:
Objective: To determine the incidence of interval cancers occurring in the first 12 months after mammographic screening at a mammographic screening service. Design: Retrospective analysis of data obtained by cross-matching the screening Service's database and the New South Wales Central Cancer Registry database. Setting: The Central & Eastern Sydney Service of BreastScreen NSW. Participants: Women aged 40-69 years at first screen, who attended for their first or second screen between 1 March 1988 and 31 December 1992. Main outcome measures: Interval-cancer rates per 10,000 screens and as a proportion of the underlying incidence of breast cancer (as estimated by the underlying rate in the total NSW population). Results: The 12-month interval-cancer incidence per 10,000 screens was 4.17 for the 40-49 years age group (95% confidence interval [CI], 1.35-9.73) and 4.64 for the 50-69 years age group (95% CI, 2.47-7.94). Proportional incidence rates were 30.1% for the 40-49 years age group (95% CI, 9.8-70.3) and 22% for the 50-69 years age group (95% CI, 11.7-37.7). There was no significant difference between the proportional incidence rate for the 50-69 years age group at the Central & Eastern Sydney Service and those of the major successful overseas screening trials. Conclusion: Screening quality was acceptable and should result in a significant mortality reduction in the screened population. Given the small number of cancers involved, comparison of interval-cancer statistics of mammographic screening programs with trials requires age-specific or age-adjusted data, and consideration of the confidence intervals of both program and trial data.
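For counts this small, the quoted 95% confidence intervals are typically exact Poisson (Garwood) intervals. A minimal sketch of how a rate per 10,000 screens and its CI can be computed; the count and screen totals below are invented for illustration, not the study's data:

```python
from scipy.stats import chi2

def poisson_ci(x: int, alpha: float = 0.05):
    """Exact (Garwood) confidence interval for a Poisson count x."""
    lo = 0.5 * chi2.ppf(alpha / 2, 2 * x) if x > 0 else 0.0
    hi = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (x + 1))
    return lo, hi

cancers, screens = 9, 19_400          # hypothetical figures for illustration
lo, hi = poisson_ci(cancers)
per = 10_000 / screens
print(f"{cancers * per:.2f} per 10,000 screens "
      f"(95% CI {lo * per:.2f}-{hi * per:.2f})")
```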
Abstract:
Current theoretical thinking about dual processes in recognition relies heavily on the measurement operations embodied within the process dissociation procedure. We critically evaluate the ability of this procedure to support this theoretical enterprise. We show that there are alternative processes that would produce a rough invariance in familiarity (a key prediction of the dual-processing approach) and that the process dissociation procedure does not have the power to differentiate between these alternative possibilities. We also show that attempts to relate parameters estimated by the process dissociation procedure to subjective reports (remember-know judgments) cannot differentiate between alternative dual-processing models and that there are problems with some of the historical evidence and with obtaining converging evidence. Our conclusion is that more specific theories incorporating ideas about representation and process are required.
Abstract:
Objective: To evaluate the impact of increasing the minimum resupply period for prescriptions under the Pharmaceutical Benefits Scheme (PBS) in November 1994. The intervention was designed to reduce the stockpiling of medicines used for chronic medical conditions under the PBS safety net. Methods: Interrupted time-series regression analyses were performed on 114 months of PBS drug-utilisation data from January 1991 to June 2000. These analyses assessed whether there had been a significant interaction between the onset of the intervention in November 1994 and the extreme levels of drug utilisation in the months of December (peak utilisation) and January (lowest utilisation), respectively. Both serial and 12-month-lag autocorrelations were controlled for. Results: The onset of the intervention was associated with a significant reduction in the December peak in drug utilisation; after the introduction of the policy there were 1,150,196 fewer prescriptions on average for that month (95% CI 708,333-1,592,059). There was, however, no significant change in the low level of utilisation in January. The effect of the policy appears to be decreasing across successive post-intervention years, though the odds of a prescription being dispensed in December remained significantly lower in 1999 compared with each of the pre-intervention years (11% vs. 14%). Conclusion: Analysis of the impact of increasing the resupply period for PBS prescriptions showed that the magnitude of peak utilisation in December was markedly reduced by the policy, though this effect appears to be decreasing over time. Continued monitoring and policy review are warranted in order to ensure that the initial effect of the intervention is maintained.
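The analysis described, segmented regression on monthly counts with an intervention-by-December interaction and autocorrelation controls, can be sketched on synthetic data as below; the abstract does not give the exact model specification, so the formula and the HAC correction are assumptions standing in for it.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
months = pd.date_range("1991-01", periods=114, freq="MS")   # Jan 1991 - Jun 2000
df = pd.DataFrame({
    "t": np.arange(114),
    "december": (months.month == 12).astype(int),
    "post": (months >= "1994-11").astype(int),              # policy onset Nov 1994
})
# Synthetic counts with a December peak that shrinks after the intervention.
df["scripts"] = (10_000 + 5 * df.t + 3_000 * df.december
                 - 1_500 * df.december * df.post
                 + rng.normal(0, 300, 114)).round()

# The December-by-post interaction tests whether the policy cut the peak.
fit = smf.ols("scripts ~ t + post + december + december:post", data=df).fit(
    cov_type="HAC", cov_kwds={"maxlags": 12})               # autocorrelation-robust SEs
print(fit.params["december:post"], fit.pvalues["december:post"])
```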