930 results for derivation
Synthesis of serial communications controller using higher abstraction level derivation (HALD) model
Abstract:
Data refinements are refinement steps in which a program’s local data structures are changed. Data refinement proof obligations require the software designer to find an abstraction relation that relates the states of the original and new program. In this paper we describe an algorithm that helps a designer find an abstraction relation for a proposed refinement. Given sufficient time and space, the algorithm can find a minimal abstraction relation, and thus show that the refinement holds. As it executes, the algorithm displays mappings that cannot be in any abstraction relation. When the algorithm is not given sufficient resources to terminate, these mappings can help the designer find a suitable abstraction relation. The same algorithm can be used to test an abstraction relation supplied by the designer.
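The search the abstract describes can be sketched as a greatest-fixpoint pruning loop. The state spaces, step functions, and observation test below are invented toy examples under that assumption, not the paper's actual algorithm or data structures:

```python
# Hypothetical sketch (not the paper's algorithm): prune candidate
# (abstract, concrete) state pairs that cannot belong to any abstraction
# relation, as a greatest-fixpoint computation over toy transition systems.

def prune_abstraction_relation(abs_states, conc_states, abs_step, conc_step,
                               obs_match):
    """Start from all observation-compatible pairs and repeatedly drop any
    pair (a, c) for which some concrete step from c has no matching
    abstract step from a; dropped pairs are reported as they are found."""
    relation = {(a, c) for a in abs_states for c in conc_states
                if obs_match(a, c)}
    impossible = []  # mappings shown to be in no abstraction relation
    changed = True
    while changed:
        changed = False
        for a, c in sorted(relation):
            matched = all(any((a2, c2) in relation for a2 in abs_step(a))
                          for c2 in conc_step(c))
            if not matched:
                relation.discard((a, c))
                impossible.append((a, c))
                changed = True
    return relation, impossible

# Toy refinement: an abstract counter mod 2, refined by a concrete counter
# mod 4 whose observable value is its parity.
abs_states = {0, 1}
conc_states = {0, 1, 2, 3}
abs_step = lambda a: {(a + 1) % 2}
conc_step = lambda c: {(c + 1) % 4}
obs_match = lambda a, c: a == c % 2

relation, impossible = prune_abstraction_relation(
    abs_states, conc_states, abs_step, conc_step, obs_match)
# Here the parity-compatible pairs survive and nothing is pruned, i.e. the
# refinement holds with this relation.
```

When the loop is cut short, the `impossible` list plays the role the abstract describes: mappings already shown to lie outside every abstraction relation.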
Abstract:
An inherent incomputability in the specification of a functional language extension that combines assertions with dynamic type checking is isolated in an explicit derivation from mathematical specifications. The combination of types and assertions (into "dynamic assertion-types", DATs) is a significant issue: because the two are congruent means of establishing program correctness, their closer integration yields benefits that their unnecessary separation forfeits. However, projecting the "set membership" view of assertion checking into dynamic types results in some incomputable combinations. Refinement of the specification of DAT checking into an implementation by rigorous application of mathematical identities becomes feasible through the addition of a "best-approximate" pseudo-equality that isolates the incomputable component of the specification. This formal treatment leads to an improved, more maintainable outcome with further development potential.
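As a rough illustration of the combination being discussed (the names and formulation here are invented, not taken from the paper), a DAT can be modelled as a runtime check that conjoins a type test with an assertion predicate, reading both as set membership:

```python
# Illustrative only (names invented): a dynamic assertion-type (DAT)
# modelled as a runtime check that conjoins a type test with an assertion
# predicate, treating both as membership tests for the same "type as set".

def make_dat(type_check, assertion):
    """Build a checker that accepts a value iff it is both of the right
    type and satisfies the assertion; the type test runs first, so the
    assertion is only evaluated on well-typed values."""
    def check(value):
        return type_check(value) and assertion(value)
    return check

# Example DAT: "positive integer".
positive_int = make_dat(lambda v: isinstance(v, int), lambda v: v > 0)
positive_int(3)    # → True
positive_int(-1)   # → False
positive_int("x")  # → False: fails the type test, assertion never runs
```

The incomputable cases the abstract refers to arise when the assertion predicate itself is not decidable; this sketch assumes decidable predicates throughout.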
Abstract:
This research involves a study of the questions of what is considered safe, how safety levels are defined or decided, and according to whom. Questions of tolerable or acceptable risk raise various issues: about the values and assumptions inherent in such levels; about decision-making frameworks at the highest level of policy making as well as at the individual level; and about the suitability and competency of decision-makers to decide and to communicate their decisions. The wide-ranging philosophical and practical concerns examined in the literature review reveal the multidisciplinary scope of this research. To support this theoretical study, empirical research was undertaken at the European Space Research and Technology Centre (ESTEC) of the European Space Agency (ESA). ESTEC is a large, multinational, high-technology organisation that presented an ideal case study for exploring how decisions about safety are made from a personal as well as an organisational perspective. A qualitative methodology was employed to gather, analyse and report the findings of this research. Significant findings reveal how experts perceive risks, and the prevalence of informal decision-making processes, partly due to the inadequacy of formal methods for deciding risk tolerability. In the field of occupational health and safety, this research has highlighted the importance of, and need for, criteria to decide whether a risk is great enough to warrant attention in setting standards and priorities for risk control and resources. From a wider perspective, and with the recognition that risk is an inherent part of life, the establishment of risk tolerability levels can be viewed as a cornerstone indicating our progress, expectations and values, of life and work, in an increasingly litigious, knowledgeable and global society.
Abstract:
In efficiency studies using the stochastic frontier approach, the main focus is to explain inefficiency in terms of exogenous variables and to compute the marginal effect of each of these determinants. Although inefficiency is estimated by its mean conditional on the composed error term (the Jondrow et al., 1982 estimator), the marginal effects are computed from the unconditional mean of inefficiency (Wang, 2002). In this paper we derive the marginal effects based on the Jondrow et al. estimator and use the bootstrap method to compute confidence intervals for the marginal effects.
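The bootstrap step can be sketched generically. The percentile method and the stand-in statistic below are illustrative assumptions, not the paper's stochastic frontier estimator:

```python
# Generic percentile-bootstrap confidence interval, of the kind used for
# the marginal effects; the statistic and data here are illustrative
# stand-ins, not the Jondrow et al. conditional-mean estimator itself.

import random
import statistics

def bootstrap_ci(data, statistic, n_boot=2000, alpha=0.05, seed=0):
    """Resample the data with replacement n_boot times and return the
    empirical (alpha/2, 1 - alpha/2) percentiles of the statistic."""
    rng = random.Random(seed)
    n = len(data)
    draws = sorted(
        statistic([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot))
    lo = draws[int(alpha / 2 * n_boot)]
    hi = draws[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Example: a 95% CI for the mean of a small toy sample; in the paper's
# setting, statistic would instead return an estimated marginal effect.
sample = [1.2, 0.8, 1.5, 0.9, 1.1, 1.3, 0.7, 1.0]
lo, hi = bootstrap_ci(sample, statistics.mean)
```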
Abstract:
Formal grammars can be used to describe complex repeatable structures such as DNA sequences. In this paper, we describe the structural composition of DNA sequences using a context-free stochastic L-grammar. L-grammars are a special class of parallel grammars that can model the growth of living organisms, e.g. plant development, and the morphology of a variety of organisms. We believe that parallel grammars can also be used to model genetic mechanisms and sequences such as promoters. Promoters are short regulatory DNA sequences located upstream of a gene. Detecting promoters in DNA sequences is important for successful gene prediction. Promoters can be recognized by certain patterns that are conserved within a species, but there are many exceptions, which makes promoter recognition a complex problem. We replace the problem of promoter recognition with the induction of context-free stochastic L-grammar rules, which are later used for the structural analysis of promoter sequences. L-grammar rules are derived automatically from the Drosophila and vertebrate promoter datasets using a genetic programming technique, and their fitness is evaluated using a Support Vector Machine (SVM) classifier. The artificial promoter sequences generated using the derived L-grammar rules are analyzed and compared with natural promoter sequences.
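A stochastic context-free L-grammar rewrites every symbol of the current word in parallel, sampling one production per symbol. The toy rules below are invented for illustration and are not the induced promoter rules:

```python
# Toy stochastic context-free L-grammar: every symbol of the current word
# is rewritten in parallel, with one production sampled per symbol. The
# rules are invented for illustration, not the induced promoter rules.

import random

RULES = {
    # symbol -> list of (probability, replacement)
    "S": [(0.5, "TA"), (0.5, "SG")],
    "T": [(1.0, "TA")],
    "A": [(0.7, "A"), (0.3, "AT")],
    "G": [(1.0, "GC")],
    "C": [(1.0, "C")],
}

def step(word, rng):
    """One parallel rewriting step: sample a production for each symbol."""
    out = []
    for sym in word:
        productions = RULES[sym]
        rhs = rng.choices([r for _, r in productions],
                          weights=[p for p, _ in productions])[0]
        out.append(rhs)
    return "".join(out)

def derive(axiom, n_steps, seed=0):
    """Apply n_steps parallel rewriting steps starting from the axiom."""
    rng = random.Random(seed)
    word = axiom
    for _ in range(n_steps):
        word = step(word, rng)
    return word

artificial = derive("S", 5)  # an artificial sequence over {S, A, T, G, C}
```

The parallelism, where all symbols are rewritten in the same step, is what distinguishes L-grammars from ordinary sequential context-free derivation.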
Abstract:
Lake Analyzer is a numerical code coupled with supporting visualization tools for determining indices of mixing and stratification that are critical to the biogeochemical cycles of lakes and reservoirs. Stability indices, including Lake Number, Wedderburn Number, Schmidt Stability, and thermocline depth are calculated according to established literature definitions and returned to the user in a time series format. The program was created for the analysis of high-frequency data collected from instrumented lake buoys, in support of the emerging field of aquatic sensor network science. Available outputs for the Lake Analyzer program are: water temperature (error-checked and/or down-sampled), wind speed (error-checked and/or down-sampled), metalimnion extent (top and bottom), thermocline depth, friction velocity, Lake Number, Wedderburn Number, Schmidt Stability, mode-1 vertical seiche period, and Brunt-Väisälä buoyancy frequency. Secondary outputs for several of these indices delineate the parent thermocline depth (seasonal thermocline) from the shallower secondary or diurnal thermocline. Lake Analyzer provides a program suite and best practices for the comparison of mixing and stratification indices in lakes across gradients of climate, hydro-physiography, and time, and enables a more detailed understanding of the resulting biogeochemical transformations at different spatial and temporal scales.
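One of the listed indices can be illustrated simply: a naive thermocline-depth estimate as the depth of the steepest temperature gradient in a profile. Lake Analyzer itself works on density and uses more careful interpolation, so this is a sketch of the idea only:

```python
# Naive sketch of one Lake Analyzer-style index: the thermocline located
# at the layer with the steepest temperature gradient. Lake Analyzer uses
# density profiles and finer interpolation; the profile below is invented.

def thermocline_depth(depths, temps):
    """Return the midpoint depth of the layer with the largest temperature
    decrease per metre of descent (the steepest gradient)."""
    best_grad, best_depth = 0.0, depths[0]
    for i in range(len(depths) - 1):
        dz = depths[i + 1] - depths[i]
        grad = (temps[i] - temps[i + 1]) / dz  # degC per metre of descent
        if grad > best_grad:
            best_grad = grad
            best_depth = (depths[i] + depths[i + 1]) / 2
    return best_depth

# Stratified summer profile: warm epilimnion, sharp metalimnion, cold
# hypolimnion.
depths = [0, 2, 4, 6, 8, 10, 12]                    # m
temps = [24.0, 23.8, 23.5, 18.0, 10.0, 8.5, 8.0]    # degC
thermocline_depth(depths, temps)  # → 7.0 (midpoint of the 6-8 m layer)
```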