945 results for Generalized Derivation
Synthesis of serial communications controller using higher abstraction level derivation (HALD) model
Abstract:
Data refinements are refinement steps in which a program’s local data structures are changed. Data refinement proof obligations require the software designer to find an abstraction relation that relates the states of the original and new program. In this paper we describe an algorithm that helps a designer find an abstraction relation for a proposed refinement. Given sufficient time and space, the algorithm can find a minimal abstraction relation, and thus show that the refinement holds. As it executes, the algorithm displays mappings that cannot be in any abstraction relation. When the algorithm is not given sufficient resources to terminate, these mappings can help the designer find a suitable abstraction relation. The same algorithm can be used to test an abstraction relation supplied by the designer.
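To make the idea concrete, here is a minimal, hypothetical sketch (not the paper's algorithm): it computes the greatest simulation-style relation between a concrete and an abstract transition system by starting from all state pairs and pruning the mappings that cannot be in any abstraction relation, reporting the pruned pairs along the way. The transition-system encoding and the name greatest_abstraction_relation are illustrative assumptions; real data-refinement proof obligations also constrain initialization and observable values, which this sketch omits.

```python
# Hypothetical sketch: compute the greatest simulation-style relation between
# a concrete and an abstract transition system, recording pruned pairs as
# "mappings that cannot be in any abstraction relation".

def greatest_abstraction_relation(conc_states, abs_states, conc_step, abs_step):
    relation = {(c, a) for c in conc_states for a in abs_states}
    excluded = []  # pairs shown impossible; useful even if we stop early
    changed = True
    while changed:
        changed = False
        for (c, a) in list(relation):
            # Every concrete step from c must be matched by some abstract
            # step from a whose target pair is still in the relation.
            for c2 in conc_step.get(c, ()):
                if not any((c2, a2) in relation for a2 in abs_step.get(a, ())):
                    relation.discard((c, a))
                    excluded.append((c, a))
                    changed = True
                    break
    return relation, excluded

# Toy refinement: a two-phase traffic light against an abstract stop/go
# machine plus a dead abstract state that cannot match anything with successors.
conc_step = {"red": {"green"}, "green": {"red"}}
abs_step = {"stop": {"go"}, "go": {"stop"}, "broken": set()}
R, ruled_out = greatest_abstraction_relation(
    conc_step.keys(), abs_step.keys(), conc_step, abs_step)
print(sorted(R))          # all (light, stop/go) pairs survive
print(sorted(ruled_out))  # ('green', 'broken') and ('red', 'broken') cannot
```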
Abstract:
An inherent incomputability in the specification of a functional language extension that combines assertions with dynamic type checking is isolated by an explicit derivation from mathematical specifications. The combination of types and assertions into "dynamic assertion-types" (DATs) is a significant issue: because the two are congruent means of establishing program correctness, integrating them more closely is beneficial, while separating them unnecessarily is harmful. However, projecting the "set membership" view of assertion checking into dynamic types produces some incomputable combinations. Refining the specification of DAT checking into an implementation by rigorous application of mathematical identities becomes feasible once a "best-approximate" pseudo-equality is added that isolates the incomputable component of the specification. This formal treatment leads to an improved, more maintainable outcome with further development potential.
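As a hedged illustration of the "best-approximate" idea (an assumed reading, not the paper's construction): membership in a dynamic assertion-type {v | P(v)} is only semi-decidable when P may diverge, so a checker can return a third verdict UNKNOWN once a resource budget is exhausted instead of looping forever. The names dat_check and UNKNOWN and the wall-clock budget are assumptions of this sketch.

```python
# Hypothetical sketch: a three-valued, budgeted membership test for the
# dynamic assertion-type {v | predicate(v)}. UNKNOWN is the best-approximate
# verdict returned when the predicate exhausts its time budget.
import threading

UNKNOWN = object()  # third verdict isolating the incomputable component

def dat_check(value, predicate, budget=0.05):
    result = {}
    def run():
        try:
            result["verdict"] = bool(predicate(value))
        except Exception:
            result["verdict"] = False   # a crashing assertion fails
    worker = threading.Thread(target=run, daemon=True)
    worker.start()
    worker.join(timeout=budget)         # give the predicate a time budget
    return result.get("verdict", UNKNOWN)

print(dat_check(4, lambda v: v % 2 == 0))   # True: membership decided
print(dat_check(3, lambda v: v % 2 == 0))   # False: non-membership decided
def diverges(v):                            # the incomputable case
    while True:
        pass
print(dat_check(4, diverges) is UNKNOWN)    # True: budget exhausted
```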
Abstract:
In the Bayesian framework, predictions for a regression problem are expressed in terms of a distribution of output values. The mode of this distribution corresponds to the most probable output, while the uncertainty associated with the predictions can conveniently be expressed in terms of error bars. In this paper we consider the evaluation of error bars in the context of the class of generalized linear regression models. We provide insights into the dependence of the error bars on the location of the data points and we derive an upper bound on the true error bars in terms of the contributions from individual data points which are themselves easily evaluated.
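In the standard Bayesian treatment of a linear-in-parameters model with basis functions phi and Gaussian prior and noise (cf. Bishop), the error bar at input x is sigma^2(x) = 1/beta + phi(x)^T A^{-1} phi(x) with A = alpha*I + beta*Phi^T Phi. The sketch below, with an assumed Gaussian basis and assumed hyperparameters, illustrates the dependence of the error bars on where the data points lie.

```python
# Minimal sketch of Bayesian error bars for a fixed-basis regression model:
# sigma^2(x) = 1/beta + phi(x)^T A^{-1} phi(x), A = alpha*I + beta*Phi^T Phi.
import numpy as np

def design(X, centers, width=0.5):
    # Gaussian basis functions phi_k(x) = exp(-(x - c_k)^2 / (2 width^2))
    return np.exp(-(X[:, None] - centers[None, :])**2 / (2 * width**2))

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, 25)              # training inputs cluster in [-1, 1]
centers = np.linspace(-2.0, 2.0, 9)
alpha, beta = 1.0, 100.0                    # prior and noise precisions (assumed)

Phi = design(X, centers)
A = alpha * np.eye(len(centers)) + beta * Phi.T @ Phi

Xq = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # query points inside and outside the data
PhiQ = design(Xq, centers)
var = 1.0/beta + np.einsum('ij,ij->i', PhiQ @ np.linalg.inv(A), PhiQ)
print(np.sqrt(var))   # error bars grow where there are no nearby data points
```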
Abstract:
Several indices of plant capacity utilization based on the concept of a best-practice frontier have been proposed in the literature (Färe et al., 1992; De Borger and Kerstens, 1998). This paper suggests an alternative measure of capacity utilization change based on the generalized Malmquist index proposed by Grifell-Tatjé and Lovell in 1998. The advantage of this specification is that it allows productivity growth to be measured irrespective of the nature of scale economies. The index is then used to measure the capacity change of a panel of Italian firms over the period 1989-94 using Data Envelopment Analysis, and its ability to explain short-run movements of output is assessed.
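For illustration only (the generalized index of Grifell-Tatjé and Lovell additionally adjusts for scale effects, which is not reproduced here): Malmquist-type indices are built from DEA distance functions, e.g. the standard index is the geometric mean M = [ (D^t(x',y')/D^t(x,y)) * (D^{t+1}(x',y')/D^{t+1}(x,y)) ]^{1/2} over the two period technologies. The sketch below computes one such output-oriented distance function under constant returns to scale; the firm data are invented.

```python
# Illustrative sketch: an output-oriented, constant-returns DEA distance
# function, the building block of Malmquist-type productivity indices.
import numpy as np
from scipy.optimize import linprog

def output_theta(x0, y0, X, Y):
    """Max theta s.t. sum_j lam_j X_j <= x0 and sum_j lam_j Y_j >= theta * y0;
    the output distance function is D(x0, y0) = 1 / theta."""
    n = X.shape[0]
    c = np.concatenate(([-1.0], np.zeros(n)))          # linprog minimizes -theta
    A_inputs = np.hstack((np.zeros((X.shape[1], 1)), X.T))
    A_outputs = np.hstack((y0.reshape(-1, 1), -Y.T))   # theta*y0 - lam.Y <= 0
    res = linprog(c,
                  A_ub=np.vstack((A_inputs, A_outputs)),
                  b_ub=np.concatenate((x0, np.zeros(len(y0)))),
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

X = np.array([[2.0], [4.0], [6.0]])   # one input per firm (invented data)
Y = np.array([[2.0], [5.0], [6.0]])   # one output per firm
for i in range(3):
    D = 1.0 / output_theta(X[i], Y[i], X, Y)
    print(f"firm {i}: D = {D:.3f}")   # D = 1 means the firm is on the frontier
```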
Abstract:
This research studies the questions of what is considered "safe", how safety levels are defined or decided, and according to whom. Questions of tolerable or acceptable risk raise various issues: about the values and assumptions inherent in such levels; about decision-making frameworks at the highest level of policy making as well as at the individual level; and about the suitability and competency of decision-makers to decide and to communicate their decisions. The wide-ranging philosophical and practical concerns examined in the literature review reveal the multi-disciplinary scope of this research. To support this theoretical study, empirical research was undertaken at the European Space Research and Technology Centre (ESTEC) of the European Space Agency (ESA). ESTEC is a large, multinational, high-technology organisation and presented an ideal case study for exploring how decisions about safety are made, from a personal as well as an organisational perspective. A qualitative methodology was employed to gather, analyse and report the findings of this research. Significant findings reveal how experts perceive risks, and show the prevalence of informal decision-making processes, due in part to the inadequacy of formal methods for deciding risk tolerability. In the field of occupational health and safety, this research highlights the importance of, and need for, criteria to decide whether a risk is great enough to warrant attention when setting standards and priorities for risk control and resources. From a wider perspective, and recognising that risk is an inherent part of life, the establishment of risk tolerability levels can be viewed as a set of cornerstones indicating our progress, expectations and values, in life and work, in an increasingly litigious, knowledgeable and global society.
Abstract:
Removing noise from piecewise constant (PWC) signals is a challenging signal processing problem arising in many practical contexts. For example, in exploration geosciences, noisy drill-hole records need to be separated into stratigraphic zones, and in biophysics, jumps between molecular dwell states have to be extracted from noisy fluorescence microscopy signals. Many PWC denoising methods exist, including total variation regularization, mean shift clustering, stepwise jump placement, running medians, convex clustering shrinkage and bilateral filtering; conventional linear signal processing methods are fundamentally unsuited. This paper (part I, the first of two) shows that most of these methods are associated with a special case of a generalized functional that, when minimized, achieves PWC denoising. The minimizer can be obtained by diverse solver algorithms, including stepwise jump placement, convex programming, finite differences, iterated running medians, least angle regression, regularization path following and coordinate descent. In the second paper, part II, we introduce novel PWC denoising methods and compare them, on synthetic and real signals, showing that the new understanding of the problem gained in part I leads to new methods that have a useful role to play.
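As one concrete special case, total variation regularization minimizes E(m) = (1/2) * sum_i (m_i - x_i)^2 + gamma * sum_i |m_{i+1} - m_i|, whose quadratic-plus-absolute-difference structure is what makes the minimizer piecewise constant. Among the solvers listed, iterated running medians is simple enough to sketch in a few lines; the toy data and parameter choices below are assumptions, not the paper's experiments.

```python
# Sketch of one solver named above, iterated running medians: apply a running
# median of odd width until the signal no longer changes (a "root" of the
# filter), which drives the estimate toward a piecewise constant shape.
import numpy as np
from scipy.signal import medfilt

def iterated_running_median(x, kernel_size=5, max_iter=100):
    m = np.asarray(x, dtype=float)
    for _ in range(max_iter):
        m_next = medfilt(m, kernel_size=kernel_size)
        if np.array_equal(m_next, m):   # fixed point of the median filter
            break
        m = m_next
    return m

# Noisy two-level step signal (invented test data).
rng = np.random.default_rng(1)
x = np.concatenate((np.zeros(50), np.ones(50))) + 0.2 * rng.standard_normal(100)
m = iterated_running_median(x)
print(m[:3], m[-3:])   # flat levels near 0 and 1, with the jump preserved
```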
Abstract:
Removing noise from signals that are piecewise constant (PWC) is a challenging signal processing problem that arises in many practical scientific and engineering contexts. In the first paper (part I) of this series of two, we presented background theory, building on results from the image processing community, to show that the majority of these algorithms, and more proposed in the wider literature, are each associated with a special case of a generalized functional that, when minimized, solves the PWC denoising problem, and we showed how the minimizer can be obtained by a range of computational solver algorithms. In this second paper (part II), using the understanding developed in part I, we introduce several novel PWC denoising methods that, for example, combine the global behaviour of mean shift clustering with the local smoothing of total variation diffusion, and we give example solver algorithms for these new methods. Comparisons between these methods are performed on synthetic and real signals, revealing that our new methods have a useful role to play. Finally, we touch on overlaps between the generalized methods of these two papers and others, such as wavelet shrinkage, hidden Markov models, and piecewise smooth filtering.
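As a sketch of the mean shift ingredient alone (not the paper's combined mean-shift/total-variation method): running mean shift over the signal's value distribution pulls every sample toward a mode of the kernel density estimate of the values, so the output collapses onto a few constant levels. The bandwidth, iteration count and toy signal below are assumptions.

```python
# Sketch of mean shift clustering applied to a signal's values: each sample
# moves to the Gaussian-kernel weighted mean of all sample values, converging
# to a mode of the value density, i.e. a constant level.
import numpy as np

def mean_shift_levels(x, bandwidth=0.2, n_iter=50):
    x = np.asarray(x, dtype=float)
    m = x.copy()
    for _ in range(n_iter):
        w = np.exp(-(m[:, None] - x[None, :])**2 / (2 * bandwidth**2))
        m = (w @ x) / w.sum(axis=1)      # kernel-weighted mean of the values
    return m

# Noisy two-level step signal (invented test data).
rng = np.random.default_rng(2)
x = np.concatenate((np.zeros(40), np.ones(40))) + 0.1 * rng.standard_normal(80)
levels = mean_shift_levels(x)
print(np.round(levels[:3], 2), np.round(levels[-3:], 2))  # near 0 and near 1
```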