65 results for machine translation programs
in University of Queensland eSpace - Australia
Abstract:
We present an approach to parsing relative clauses in Arabic in the tradition of the Paninian Grammar Framework [2] which leads to deriving a common logical form for equivalent sentences. Particular attention is paid to the analysis of resumptive pronouns in the retrieval of syntactico-semantic relationships. The analysis arises from the development of a lexicalised dependency grammar for Arabic that has application for machine translation.
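A minimal sketch of the general idea, assuming a toy dependency structure with invented word forms and relation names (not the grammar or logical form defined in the paper): the resumptive pronoun inside the relative clause is bound to the noun the clause modifies, so that equivalent sentences map to the same predicate-argument structure.

```python
# Hypothetical sketch: resolving a resumptive pronoun inside a relative
# clause so that equivalent sentences yield the same predicate-argument
# structure. Labels and relation names are illustrative only.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    form: str                      # surface word
    rel: str                       # dependency relation to the head
    head: Optional["Node"] = None  # governing node
    deps: list = field(default_factory=list)

def attach(head, child, rel):
    child.head, child.rel = head, rel
    head.deps.append(child)
    return child

# "the book that the student read (it)": the resumptive pronoun stands in
# for the object of 'read' and must be bound back to 'book'.
book = Node("book", "root")
read = attach(book, Node("read", ""), "relcl")
attach(read, Node("student", ""), "subj")
attach(read, Node("it", ""), "obj")    # resumptive pronoun

def logical_form(verb):
    """Map a clause to (predicate, subject, object), replacing any
    resumptive pronoun with the noun the relative clause modifies."""
    args = {}
    for d in verb.deps:
        filler = d
        if d.form == "it" and verb.rel == "relcl":   # resumptive pronoun
            filler = verb.head                       # bind to antecedent
        args[d.rel] = filler.form
    return (verb.form, args.get("subj"), args.get("obj"))

print(logical_form(read))   # ('read', 'student', 'book')
```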
Abstract:
Dynamic binary translation is the process of translating, modifying and rewriting executable (binary) code from one machine to another at run-time. This process of low-level re-engineering consists of a reverse engineering phase followed by a forward engineering phase. UQDBT, the University of Queensland Dynamic Binary Translator, is a machine-adaptable translator. Adaptability is provided through the specification of properties of machines and their instruction sets, allowing the support of different pairs of source and target machines. Most binary translators are closely bound to a pair of machines, making analyses and code hard to reuse. Like most virtual machines, UQDBT performs generic optimizations that apply to a variety of machines. Frequently executed code is translated to native code by the use of edge weight instrumentation, which makes UQDBT converge more quickly than systems based on instruction speculation. In this paper, we describe the architecture and run-time feedback optimizations performed by the UQDBT system, and provide results obtained on the x86 and SPARC® platforms.
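A rough sketch of the run-time feedback idea described above, using invented names and an assumed hotness threshold rather than UQDBT's actual interfaces: counters attached to control-flow edges drive the decision to retranslate a frequently executed block with the native back end.

```python
# Illustrative sketch (not the UQDBT implementation): counting control-flow
# edge executions and promoting a destination block to native translation
# once an edge becomes hot. Threshold and function names are assumptions.
from collections import defaultdict

HOT_THRESHOLD = 50          # assumed value; real systems tune this
edge_counts = defaultdict(int)
native_blocks = {}          # block address -> natively translated code

def record_edge(src_block, dst_block, translate_native):
    """Instrumentation stub executed on each taken control-flow edge."""
    edge_counts[(src_block, dst_block)] += 1
    if (edge_counts[(src_block, dst_block)] >= HOT_THRESHOLD
            and dst_block not in native_blocks):
        # Edge is hot: retranslate the destination block with the
        # optimising (native) back end instead of the generic one.
        native_blocks[dst_block] = translate_native(dst_block)

# Example use with a stand-in translator:
for _ in range(60):
    record_edge(0x400100, 0x400180, lambda addr: f"native({hex(addr)})")
print(native_blocks)        # the hot destination block now has a native translation
```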
Abstract:
The real-time refinement calculus is an extension of the standard refinement calculus in which programs are developed from a precondition plus post-condition style of specification. In addition to adapting standard refinement rules to be valid in the real-time context, specific rules are required for the timing constructs such as delays and deadlines. Because many real-time programs may be nonterminating, a further extension is to allow nonterminating repetitions. A real-time specification constrains not only what values should be output, but when they should be output. Hence for a program to implement such a specification, it must guarantee to output values by the specified times. With standard programming languages, such guarantees cannot be made without taking into account the timing characteristics of the implementation of the program on a particular machine. To avoid having to consider such details during the refinement process, we have extended our real-time programming language with a deadline command. The deadline command takes no time to execute and always guarantees to meet the specified time; if the deadline has already passed, the deadline command is infeasible (miraculous in Dijkstra's terminology). When such a real-time program is compiled for a particular machine, one needs to ensure that all execution paths leading to a deadline are guaranteed to reach it by the specified time. We consider this checking as part of an extended compilation phase. The addition of the deadline command restores for the real-time language the advantage of machine independence enjoyed by non-real-time programming languages.
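A hypothetical sketch of the check performed in such an extended compilation phase, with invented instruction costs and path representation: every execution path leading to a deadline command must have a worst-case execution time on the target machine that fits within the deadline.

```python
# Hypothetical sketch of the path check described above: for each execution
# path from a timing reference point to a deadline command, sum the target
# machine's worst-case instruction times and confirm the deadline is met.
# The instruction costs and path encoding are invented for illustration.
WCET_NS = {"load": 40, "add": 10, "store": 45, "branch": 25}  # assumed costs

def path_meets_deadline(path, deadline_ns):
    """Return True if the worst-case time along `path` is within the deadline."""
    worst_case = sum(WCET_NS[instr] for instr in path)
    return worst_case <= deadline_ns

paths_to_deadline = [
    ["load", "add", "store"],            # one execution path (95 ns worst case)
    ["load", "branch", "add", "store"],  # an alternative path (120 ns worst case)
]
deadline_ns = 110
print(all(path_meets_deadline(p, deadline_ns) for p in paths_to_deadline))
# False: the second path's worst case exceeds the 110 ns deadline, so this
# compilation of the deadline command would have to be rejected.
```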
Abstract:
We define a language and a predicative semantics to model concurrent real-time programs. We consider different communication paradigms between the concurrent components of a program: communication via shared variables and asynchronous message passing (for different models of channels). The semantics is the basis for a refinement calculus to derive machine-independent concurrent real-time programs from specifications. We give some examples of refinement laws that deal with concurrency.
Abstract:
We propose a method for the timing analysis of concurrent real-time programs with hard deadlines. We divide the analysis into a machine-independent and a machine-dependent task. The latter takes into account the execution times of the program on a particular machine. Therefore, our goal is to make the machine-dependent phase of the analysis as simple as possible. We succeed in the sense that the machine-dependent phase remains the same as in the analysis of sequential programs. We shift the complexity introduced by concurrency completely to the machine-independent phase.
Abstract:
Mediated physical activity interventions can reach large numbers of people at low cost. Programs delivered through the mail that target the stage of motivational readiness have been shown to increase activity. Communication technology (websites and e-mail) might provide a means for delivering similar programs. A randomized trial was conducted between August and October 2001. Participants included staff at an Australian university (n=655; mean age 43 years, standard deviation 10 years). Participants were randomized to either an 8-week, stage-targeted print program (Print) or an 8-week, stage-targeted website (Web) program. The main outcome was change in self-reported physical activity.
Abstract:
Previous work on generating state machines for the purpose of class testing has not been formally based. There has also been work on deriving state machines from formal specifications for testing non-object-oriented software. We build on this work by presenting a method for deriving a state machine for testing purposes from a formal specification of the class under test. We also show how the resulting state machine can be used as the basis for a test suite developed and executed using an existing framework for class testing. To derive the state machine, we identify the states and possible interactions of the operations of the class under test. The Test Template Framework is used to formally derive the states from the Object-Z specification of the class under test. The transitions of the finite state machine are calculated from the derived states and the class's operations. The formally derived finite state machine is transformed to a ClassBench testgraph, which is used as input to the ClassBench framework to test a C++ implementation of the class. The method is illustrated using a simple bounded queue example.
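A simplified sketch of the overall idea using the bounded queue example, written in Python rather than Object-Z and C++, and not using the Test Template Framework or ClassBench themselves: abstract states (EMPTY, PARTIAL, FULL) and transitions between them drive operation sequences against an implementation, with the reached abstract state checked after each step.

```python
# Illustrative sketch (not the ClassBench/Test Template Framework tooling):
# a small finite state machine over the abstract states of a bounded queue,
# used to drive operations against an implementation and check each transition.
CAPACITY = 2

class BoundedQueue:
    def __init__(self):
        self.items = []
    def enqueue(self, x):
        if len(self.items) < CAPACITY:
            self.items.append(x)
    def dequeue(self):
        if self.items:
            return self.items.pop(0)

def abstract_state(q):
    if not q.items:
        return "EMPTY"
    return "FULL" if len(q.items) == CAPACITY else "PARTIAL"

# Transitions calculated from the states and operations: (state, op) -> next state
transitions = {
    ("EMPTY", "enqueue"): "PARTIAL",
    ("PARTIAL", "enqueue"): "FULL",
    ("FULL", "dequeue"): "PARTIAL",
    ("PARTIAL", "dequeue"): "EMPTY",
}

# One walk through the state machine (a miniature "testgraph").
q, state = BoundedQueue(), "EMPTY"
for op in ["enqueue", "enqueue", "dequeue", "dequeue"]:
    if op == "enqueue":
        q.enqueue(1)
    else:
        q.dequeue()
    state = transitions[(state, op)]
    assert abstract_state(q) == state, f"unexpected state after {op}"
print("all transitions verified")
```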
Abstract:
Recent structural studies of proteins mediating membrane fusion reveal intriguing similarities between diverse viral and mammalian systems. Particularly striking is the close similarity between the transmembrane envelope glycoproteins from the retrovirus HTLV-1 and the filovirus Ebola. These similarities suggest similar mechanisms of membrane fusion. The model that fits most currently available data suggests fusion activation in viral systems is driven by a symmetrical conformational change triggered by an activation event such as receptor binding or a pH change. The mammalian vesicle fusion mediated by the SNARE protein complex most likely occurs by a similar mechanism but without symmetry constraints.
Abstract:
The World Wide Web (WWW) is useful for distributing scientific data. Most existing web data resources organize their information either in structured flat files or relational databases with basic retrieval capabilities. For databases with one or a few simple relations, these approaches are successful, but they can be cumbersome when there is a data model involving multiple relations between complex data. We believe that knowledge-based resources offer a solution in these cases. Knowledge bases have explicit declarations of the concepts in the domain, along with the relations between them. They are usually organized hierarchically, and provide a global data model with a controlled vocabulary. We have created the OWEB architecture for building online scientific data resources using knowledge bases. OWEB provides a shell for structuring data, providing secure and shared access, and creating computational modules for processing and displaying data. In this paper, we describe the translation of the online immunological database MHCPEP into an OWEB system called MHCWeb. This effort involved building a conceptual model for the data, creating a controlled terminology for the legal values for different types of data, and then translating the original data into the new structure. The OWEB environment allows for flexible access to the data by both users and computer programs.
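A hypothetical sketch of the kind of translation described, with invented field names, concepts and legal values (not MHCPEP's or MHCWeb's actual schema): flat records are mapped into a structured form whose values are checked against a controlled terminology.

```python
# Hypothetical sketch: flat database records are mapped into a conceptual
# model with a controlled terminology. Field names, concepts, and legal
# values are invented for illustration only.
CONTROLLED_VOCAB = {
    "binding": {"yes": "binder", "y": "binder", "no": "non-binder", "n": "non-binder"},
}

def translate_record(flat):
    """Map one flat-file record into the structured, vocabulary-checked form."""
    raw = flat["binding"].strip().lower()
    if raw not in CONTROLLED_VOCAB["binding"]:
        raise ValueError(f"illegal value for 'binding': {flat['binding']!r}")
    return {
        "peptide": {"sequence": flat["sequence"].upper()},
        "binding": CONTROLLED_VOCAB["binding"][raw],   # normalised controlled term
        "source": {"reference": flat["ref"]},
    }

record = {"sequence": "siinfekl", "binding": "Y", "ref": "example-ref-1"}
print(translate_record(record))
```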
Abstract:
Andrew Tooke's 1691 English translation of Samuel Pufendorf's De officio hominis et civis, published as The Whole Duty of Man According to the Law of Nature, brought Pufendorf's manual of statist natural law into English politics at a moment of temporary equilibrium in the unfinished contest between Crown and Parliament for the rights and powers of sovereignty. Drawing on the authors' re-edition of The Whole Duty of Man, this article describes and analyses a telling instance of how--by translation--the core political terms and concepts of the German natural jurist's 'absolutist' formulary were reshaped for reception in the different political culture of late seventeenth-century England.
Abstract:
1. Cluster analysis of reference sites with similar biota is the initial step in creating River Invertebrate Prediction and Classification System (RIVPACS) and similar river bioassessment models such as the Australian River Assessment System (AUSRIVAS). This paper describes and tests an alternative prediction method, Assessment by Nearest Neighbour Analysis (ANNA), based on the same philosophy as RIVPACS and AUSRIVAS but without the grouping step that some people view as artificial. 2. The steps in creating ANNA models are: (i) weighting the predictor variables using a multivariate approach analogous to principal axis correlations, (ii) calculating the weighted Euclidean distance from a test site to the reference sites based on the environmental predictors, (iii) predicting the faunal composition based on the nearest reference sites and (iv) calculating an observed/expected (O/E) analogous to RIVPACS/AUSRIVAS. 3. The paper compares AUSRIVAS and ANNA models on 17 datasets representing a variety of habitats and seasons. First, it examines each model's regressions for Observed versus Expected number of taxa, including the r(2), intercept and slope. Second, the two models' assessments of 79 test sites in New Zealand are compared. Third, the models are compared on test and presumed reference sites along a known trace metal gradient. Fourth, ANNA models are evaluated for Western Australia, a geographically distinct region of Australia. The comparisons demonstrate that ANNA and AUSRIVAS are generally equivalent in performance, although ANNA turns out to be potentially more robust for the O versus E regressions and is potentially more accurate on the trace metal gradient sites. 4. The ANNA method is recommended for use in bioassessment of rivers, at least for corroborating the results of the well-established AUSRIVAS- and RIVPACS-type models, if not to replace them.
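A minimal sketch of steps (ii)-(iv) of the ANNA method, with invented sites, taxa and predictor weights; in ANNA the weights come from the multivariate procedure of step (i), which is not reproduced here.

```python
# Illustrative sketch of the ANNA prediction steps described above. The
# predictor weights are given directly here; in ANNA they are derived by a
# multivariate procedure analogous to principal axis correlations. All
# sites, taxa and values are invented example data.
import math

weights = {"altitude": 0.7, "catchment_area": 0.3}      # assumed weights

# Reference sites: environmental predictors plus observed taxa.
reference_sites = {
    "R1": ({"altitude": 120.0, "catchment_area": 35.0},
           {"Baetidae", "Leptophlebiidae", "Elmidae"}),
    "R2": ({"altitude": 300.0, "catchment_area": 12.0},
           {"Baetidae", "Gripopterygidae"}),
    "R3": ({"altitude": 110.0, "catchment_area": 40.0},
           {"Baetidae", "Leptophlebiidae", "Hydropsychidae"}),
}

def weighted_distance(a, b):
    # (ii) weighted Euclidean distance over the environmental predictors
    return math.sqrt(sum(w * (a[k] - b[k]) ** 2 for k, w in weights.items()))

def observed_over_expected(test_env, test_taxa, k=2):
    # (iii) predict the fauna from the k nearest reference sites
    nearest = sorted(reference_sites.values(),
                     key=lambda site: weighted_distance(test_env, site[0]))[:k]
    all_taxa = set().union(*(taxa for _, taxa in nearest))
    # expected richness: frequency-weighted count of taxa over the neighbours
    expected = sum(sum(t in taxa for _, taxa in nearest) / k for t in all_taxa)
    observed = len(test_taxa & all_taxa)
    # (iv) the O/E index used for the assessment
    return observed / expected

test_env = {"altitude": 130.0, "catchment_area": 33.0}
test_taxa = {"Baetidae", "Elmidae"}
print(round(observed_over_expected(test_env, test_taxa), 2))   # 0.67 for this toy data
```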