917 results for Computational lambda-calculus
Abstract:
The refinement calculus provides a framework for the stepwise development of imperative programs from specifications. In this paper we study a refinement calculus for deriving logic programs. Dealing with logic programs rather than imperative programs has the dual advantages that, due to the expressive power of logic programs, the final program is closer to the original specification, and each refinement step can achieve more. Together, these reduce the overall number of derivation steps. We present a logic programming language extended with specification constructs (including general predicates, assertions, and types and invariants) to form a wide-spectrum language. General predicates allow non-executable properties to be included in specifications. Assertions, types and invariants make assumptions about the intended inputs of a procedure explicit, and can be used during refinement to optimize the constructed logic program. We provide a semantics for the extended logic programming language and derive a set of refinement laws. Finally, we apply these laws in an example derivation.
Abstract:
Human leukocyte antigen (HLA) haplotypes are frequently evaluated for population history inferences and association studies. However, the available typing techniques for the main HLA loci usually do not allow the determination of the allele phase and the constitution of a haplotype, which may be obtained by a very time-consuming and expensive family-based segregation study. Without the family-based study, computational inference by probabilistic models is necessary to obtain haplotypes. Several authors have used the expectation-maximization (EM) algorithm to determine HLA haplotypes, but high levels of erroneous inferences are expected because of the genetic distance among the main HLA loci and the presence of several recombination hotspots. In order to evaluate the efficiency of computational inference methods, 763 unrelated individuals stratified into three different datasets had their haplotypes manually defined in a family-based study of HLA-A, -B, -DRB1 and -DQB1 segregation, and these haplotypes were compared with the data obtained by the following three methods: the EM and Excoffier-Laval-Balding (ELB) algorithms implemented in the Arlequin 3.11 software, and the PHASE method. When comparing the methods, we observed that all algorithms showed a poor performance for haplotype reconstruction with distant loci, estimating incorrect haplotypes for 38%-57% of the samples considering all algorithms and datasets. We suggest that computational haplotype inferences involving low-resolution HLA-A, HLA-B, HLA-DRB1 and HLA-DQB1 haplotypes should be considered with caution.
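As a toy illustration of the EM approach to haplotype inference (a minimal sketch, not the Arlequin or PHASE implementations; the two-locus biallelic setting and all names are illustrative assumptions), the following Python code estimates haplotype frequencies from unphased genotypes, resolving ambiguous double heterozygotes probabilistically:

```python
from itertools import product

HAPS = ["00", "01", "10", "11"]  # two-locus biallelic haplotypes

def compatible_pairs(g):
    """Ordered haplotype pairs consistent with genotype g, where
    g[i] is the count of allele '1' at locus i."""
    return [(h1, h2) for h1, h2 in product(HAPS, repeat=2)
            if all(int(h1[i]) + int(h2[i]) == g[i] for i in range(2))]

def em_haplotypes(genotypes, n_iter=50):
    """EM estimate of haplotype frequencies from unphased genotypes."""
    freq = {h: 0.25 for h in HAPS}  # uniform start
    for _ in range(n_iter):
        counts = {h: 0.0 for h in HAPS}
        for g in genotypes:
            pairs = compatible_pairs(g)
            # E-step: posterior weight of each phase resolution
            probs = [freq[a] * freq[b] for a, b in pairs]
            z = sum(probs)
            for (a, b), p in zip(pairs, probs):
                counts[a] += p / z
                counts[b] += p / z
        # M-step: renormalize expected haplotype counts
        total = sum(counts.values())
        freq = {h: c / total for h, c in counts.items()}
    return freq
```

Given unambiguous individuals carrying only the 00 and 11 haplotypes, the EM iterations resolve the ambiguous (1, 1) genotypes toward the 00/11 phasing; with many equally frequent haplotypes over distant loci, however, this resolution becomes unreliable, which is the failure mode the study quantifies.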
Abstract:
A 4-wheel is a simple graph on 5 vertices with 8 edges, formed by taking a 4-cycle and joining a fifth vertex (the centre of the 4-wheel) to each of the other four vertices. A lambda-fold 4-wheel system of order n is an edge-disjoint decomposition of the complete multigraph lambda K(n) into 4-wheels. Here, with five isolated possible exceptions when lambda = 2, we give necessary and sufficient conditions for a lambda-fold 4-wheel system of order n to be transformed into a lambda-fold 4-cycle system of order n by removing the centre vertex from each 4-wheel, together with its four adjacent edges (retaining the 4-cycle wheel rim), and reassembling these edges adjacent to wheel centres into further 4-cycles.
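The splitting step of this metamorphosis, retaining the 4-cycle rim and collecting the four spoke edges for later reassembly, can be sketched as follows (a hypothetical representation for illustration, not the authors' construction):

```python
def split_wheel(centre, rim):
    """Split a 4-wheel into its rim 4-cycle and its four spoke edges.

    centre: the hub vertex; rim: the four rim vertices in cycle order.
    Edges are frozensets, so they are undirected and hashable.
    """
    rim_cycle = [frozenset((rim[i], rim[(i + 1) % 4])) for i in range(4)]
    spokes = [frozenset((centre, v)) for v in rim]
    return rim_cycle, spokes
```

Applying this to every wheel yields the retained 4-cycles plus a pool of spoke edges; the combinatorial content of the paper is the condition under which that pool can itself be repartitioned into 4-cycles.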
Abstract:
In this paper necessary and sufficient conditions are given for the metamorphosis of a lambda-fold K(3,3)-design of order n into a lambda-fold 6-cycle system of order n, by retaining one 6-cycle subgraph from each copy of K(3,3), and then rearranging the set of all the remaining edges, three from each K(3,3), into further 6-cycles so that the result is a lambda-fold 6-cycle system.
Abstract:
Existing refinement calculi provide frameworks for the stepwise development of imperative programs from specifications. This paper presents a refinement calculus for deriving logic programs. The calculus contains a wide-spectrum logic programming language, including executable constructs such as sequential conjunction, disjunction, and existential quantification, as well as specification constructs such as general predicates, assumptions and universal quantification. A declarative semantics is defined for this wide-spectrum language based on executions. Executions are partial functions from states to states, where a state is represented as a set of bindings. The semantics is used to define the meaning of programs and specifications, including parameters and recursion. To complete the calculus, a notion of correctness-preserving refinement over programs in the wide-spectrum language is defined and refinement laws for developing programs are introduced. The refinement calculus is illustrated using example derivations and prototype tool support is discussed.
Abstract:
This theoretical note describes an expansion of the behavioral prediction equation, in line with the greater complexity encountered in models of structured learning theory (R. B. Cattell, 1996a). This presents learning theory with a vector substitute for the simpler scalar quantities by which traditional Pavlovian-Skinnerian models have hitherto been represented. Structured learning can be demonstrated by vector changes across a range of intrapersonal psychological variables (ability, personality, motivation, and state constructs). Its use with motivational dynamic trait measures (R. B. Cattell, 1985) should reveal new theoretical possibilities for scientifically monitoring change processes (dynamic calculus model; R. B. Cattell, 1996b), such as those encountered within psychotherapeutic settings (R. B. Cattell, 1987). The enhanced behavioral prediction equation suggests that static conceptualizations of personality structure such as the Big Five model are less than optimal.
Abstract:
In computer simulations of smooth dynamical systems, the original phase space is replaced by machine arithmetic, which is a finite set. The resulting spatially discretized dynamical systems do not inherit all functional properties of the original systems, such as surjectivity and existence of absolutely continuous invariant measures. This can lead to computational collapse to fixed points or short cycles. The paper studies loss of such properties in spatial discretizations of dynamical systems induced by unimodal mappings of the unit interval. The problem reduces to studying set-valued negative semitrajectories of the discretized system. As the grid is refined, the asymptotic behavior of the cardinality structure of the semitrajectories follows probabilistic laws corresponding to a branching process. The transition probabilities of this process are explicitly calculated. These results are illustrated by the example of the discretized logistic mapping.
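The collapse to fixed points or short cycles is easy to observe numerically. The sketch below (grid size and initial condition are illustrative) iterates the logistic map on a uniform grid and reports the transient length and the period of the cycle onto which the discretized trajectory collapses:

```python
def discretized_logistic(n_grid, x0, r=4.0):
    """Iterate x -> r*x*(1-x) on the uniform grid {0, 1/n_grid, ..., 1}.

    Returns (transient length, cycle length). Termination is guaranteed
    because the discretized phase space is finite.
    """
    def snap(x):
        # spatial discretization: round to the nearest grid point
        return round(x * n_grid) / n_grid

    seen, traj = {}, []
    x = snap(x0)
    while x not in seen:
        seen[x] = len(traj)
        traj.append(x)
        x = snap(r * x * (1 - x))
    return seen[x], len(traj) - seen[x]
```

For example, starting from 0.5 the exact map reaches 1.0 and then collapses to the fixed point 0 in two steps, and on any grid the trajectory's transient plus cycle length is bounded by the number of grid points, the cardinality structure whose statistics the paper derives.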
Abstract:
Signal peptides and transmembrane helices both contain a stretch of hydrophobic amino acids. This common feature makes it difficult for signal peptide and transmembrane helix predictors to correctly assign identity to stretches of hydrophobic residues near the N-terminal methionine of a protein sequence. The inability to reliably distinguish between N-terminal transmembrane helix and signal peptide is an error with serious consequences for the prediction of protein secretory status or transmembrane topology. In this study, we report a new method for differentiating protein N-terminal signal peptides and transmembrane helices. Based on the sequence features extracted from hydrophobic regions (amino acid frequency, hydrophobicity, and the start position), we set up discriminant functions and examined them on non-redundant datasets with jackknife tests. This method can incorporate other signal peptide prediction methods and achieve higher prediction accuracy. For Gram-negative bacterial proteins, 95.7% of N-terminal signal peptides and transmembrane helices can be correctly predicted (coefficient 0.90). Given a sensitivity of 90%, transmembrane helices can be identified from signal peptides with a precision of 99% (coefficient 0.92). For eukaryotic proteins, 94.2% of N-terminal signal peptides and transmembrane helices can be correctly predicted with coefficient 0.83. Given a sensitivity of 90%, transmembrane helices can be identified from signal peptides with a precision of 87% (coefficient 0.85). The method can be used to complement current transmembrane protein prediction and signal peptide prediction methods to improve their prediction accuracies.
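Two of the features the abstract names, the start position and mean hydrophobicity of the N-terminal hydrophobic stretch, can be extracted with a standard sliding-window scan over the Kyte-Doolittle hydropathy scale. The sketch below is an illustrative feature extractor only (window width and search length are assumed, and it is not the paper's discriminant function):

```python
# Kyte-Doolittle hydropathy scale
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
      "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
      "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
      "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def best_hydrophobic_window(seq, width=12, search_len=40):
    """Find the most hydrophobic window near the N-terminus.

    Returns (start position, mean hydropathy) of the width-residue
    window with the highest total Kyte-Doolittle score within the
    first search_len residues.
    """
    region = seq[:search_len]
    best = max(range(len(region) - width + 1),
               key=lambda i: sum(KD[a] for a in region[i:i + width]))
    mean = sum(KD[a] for a in region[best:best + width]) / width
    return best, mean
```

Features such as these, fed into a discriminant function, are what allow a hydrophobic stretch immediately after the initial methionine to be scored as signal peptide versus transmembrane helix.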
Abstract:
The Timed Interval Calculus, a timed-trace formalism based on set theory, is introduced. It is extended with an induction law and a unit for concatenation, which facilitates the proof of properties over trace histories. The effectiveness of the extended Timed Interval Calculus is demonstrated via a benchmark case study, the mine pump. Specifically, a safety property relating to the operation of a mine shaft is proved, based on an implementation of the mine pump and assumptions about the environment of the mine.
Abstract:
Over the last decade, component-based software development has emerged as a promising paradigm for dealing with the ever-increasing complexity of software design, evolution and reuse. SHACC is a prototyping tool for component-based systems in which components are modelled coinductively as generalized Mealy machines. The prototype is built as a HASKELL library endowed with a graphical user interface developed in Swing.
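Although SHACC itself is a HASKELL library, the underlying idea of a component as a Mealy machine, a hidden state together with a step function taking the current state and an input to an output and a next state, can be sketched in a few lines (the rendering below is an illustrative analogue, not SHACC's API):

```python
from dataclasses import dataclass
from typing import Any, Callable, List, Tuple

@dataclass
class Mealy:
    """A component as a Mealy machine: hidden state plus step function."""
    state: Any
    step: Callable[[Any, Any], Tuple[Any, Any]]  # (state, input) -> (output, state')

    def run(self, inputs: List[Any]) -> List[Any]:
        """Feed a finite input stream, collecting the output stream."""
        outputs, s = [], self.state
        for i in inputs:
            o, s = self.step(s, i)
            outputs.append(o)
        return outputs
```

For instance, an accumulator component is `Mealy(0, lambda s, i: (s + i, s + i))`; composing such machines (sequentially, in parallel, with feedback) is what a calculus of components, and a tool like SHACC, makes systematic.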
Abstract:
An alternative vector control method, using lambda-cyhalothrin-impregnated wide-mesh gauze covering openings in the walls of houses, was developed in an area in the Eastern part of the interior of Suriname. Experimental hut observations showed that Anopheles darlingi greatly reduced their biting activity (99-100%) during the first 5 months after impregnation. A model assay showed high mortality both of mosquitoes repelled by the gauze and of those that succeeded in getting through it. A field application test in 270 huts showed good acceptance by the population and good durability of the applied gauze. After introducing the method in the entire working area, replacing DDT residual housespraying, the malaria prevalence, 25-37% before application, dropped and stabilized at between 5 and 10% within one year. The operational costs were less than those of the previously used DDT housespraying program, due to a 50% reduction in the cost of materials used. The method using wide-mesh gauze impregnated with lambda-cyhalothrin strongly affects the behavior of An. darlingi. It is important to examine the effect of the method on malaria transmission further, since data obtained indirectly suggest substantial positive results.
Computational evaluation of hydraulic system behaviour with entrapped air under rapid pressurization
Abstract:
The pressurization of hydraulic systems containing entrapped air is considered a critical condition for the infrastructure's safety, due to the transient pressure variations that often occur. The objective of the present study is the computational evaluation of the trends observed in the variation of the maximum surge pressure resulting from rapid pressurizations. The comparison of the results with those obtained in previous studies is also undertaken, and a brief state of the art in this domain is presented. This research work is applied to an experimental system having entrapped air at the top of a vertical pipe section. The evaluation is developed through the elastic model based on the method of characteristics, considering a moving liquid boundary, with the results being compared with those achieved with the rigid liquid column model.
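A back-of-the-envelope rigid liquid column sketch (all parameters, the polytropic exponent, and the neglect of friction are illustrative assumptions, not the paper's experimental values) reproduces the characteristic overshoot: the column accelerates while the air pressure is below the driving pressure and compresses the pocket past equilibrium, so the peak air pressure exceeds the reservoir pressure:

```python
def entrapped_air_surge(p_res, p0=101325.0, x0=0.5, L0=5.0,
                        rho=1000.0, k=1.2, dt=1e-4, t_end=0.5):
    """Rigid-column sketch of rapid pressurization against an air pocket.

    x: air pocket length [m], V: column velocity [m/s] (positive = compressing),
    p_res: driving reservoir pressure [Pa]. Returns the peak air pressure.
    """
    x, V, p_max = x0, 0.0, p0
    for _ in range(int(t_end / dt)):
        p_air = p0 * (x0 / x) ** k        # polytropic air pocket
        L = L0 + (x0 - x)                 # moving liquid boundary
        V += dt * (p_res - p_air) / (rho * L)   # semi-implicit Euler
        x = max(x - dt * V, 1e-4)
        p_max = max(p_max, p_air)
    return p_max
```

Even this crude model shows why entrapped air is a critical condition: the surge peak depends strongly on the initial pocket size and the applied pressure, which is the kind of trend the elastic (method-of-characteristics) model in the paper evaluates more accurately.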
Abstract:
In practical applications of optimization it is common to have several conflicting objective functions to optimize. Frequently, these functions are subject to noise or can be of black-box type, preventing the use of derivative-based techniques. We propose a novel multiobjective derivative-free methodology, calling it direct multisearch (DMS), which does not aggregate any of the objective functions. Our framework is inspired by the search/poll paradigm of direct-search methods of directional type and uses the concept of Pareto dominance to maintain a list of nondominated points (from which the new iterates or poll centers are chosen). The aim of our method is to generate as many points in the Pareto front as possible from the polling procedure itself, while keeping the whole framework general enough to accommodate other disseminating strategies, in particular, when using the (here also) optional search step. DMS generalizes to multiobjective optimization (MOO) all direct-search methods of directional type. We prove under the common assumptions used in direct search for single objective optimization that at least one limit point of the sequence of iterates generated by DMS lies in (a stationary form of) the Pareto front. However, extensive computational experience has shown that our methodology has an impressive capability of generating the whole Pareto front, even without using a search step. Two by-products of this paper are (i) the development of a collection of test problems for MOO and (ii) the extension of performance and data profiles to MOO, allowing a comparison of several solvers on a large set of test problems, in terms of their efficiency and robustness to determine Pareto fronts.
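The Pareto-dominance test that DMS uses to maintain its list of nondominated points can be sketched as follows (for minimization; the filtering helper is an illustrative assumption, not the DMS polling machinery):

```python
def dominates(a, b):
    """a dominates b (minimization): a is no worse than b in every
    objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated(points):
    """Filter a list of objective vectors down to the nondominated ones."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]
```

For example, among the bi-objective points (1, 4), (2, 2), (4, 1), (3, 3) and (5, 5), the first three survive the filter: (3, 3) is dominated by (2, 2) and (5, 5) by every other point. In DMS, new trial points from the poll step are merged into such a list, and poll centers are drawn from it.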