934 results for Computer Science (all)
Abstract:
The notion of compensation is widely used in advanced transaction models as a means of recovery from failures. Similar concepts have been adopted to provide transaction-like behaviour for long-running business processes supported by workflow technology. In general, designing a compensating task for a task in the context of a workflow is not trivial. In fact, a task in a workflow process is not necessarily compensatable, in the sense that the application semantics do not always guarantee that the task's reverse operations can be forced through. In addition, the isolation requirement on data resources may make a task difficult to compensate. In this paper, we first examine the requirements that a compensating task has to satisfy. We then introduce a new concept called confirmation. With the help of confirmation, we can modify most non-compensatable tasks so that they become compensatable. This can substantially increase the availability of shared resources and greatly improve backward recovery for workflow applications in case of failures. To effectively incorporate confirmation and compensation into a workflow management environment, a three-level, bottom-up workflow design method is introduced. The implementation issues of this design are also discussed. (C) 2003 Elsevier Science Inc. All rights reserved.
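To make the compensation/confirmation pairing concrete, here is a minimal Python sketch of the pattern the abstract describes: provisional effects are compensated in reverse order on failure and confirmed on success. The task structure and handler names are illustrative assumptions, not the paper's design.

```python
# Illustrative sketch: tasks expose optional confirm/compensate handlers.
# On failure, completed-but-unconfirmed tasks are compensated in reverse
# order; on success, deferred effects are confirmed (made permanent).

class Task:
    def __init__(self, name, execute, confirm=None, compensate=None):
        self.name, self.execute = name, execute
        self.confirm, self.compensate = confirm, compensate

def run_workflow(tasks):
    done = []
    try:
        for task in tasks:
            task.execute()
            done.append(task)
    except Exception:
        # backward recovery: undo provisional effects in reverse order
        for task in reversed(done):
            if task.compensate:
                task.compensate()
        raise
    # forward completion: confirm deferred effects, releasing resources
    for task in done:
        if task.confirm:
            task.confirm()
```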
Abstract:
The paper presents a computational system based upon formal principles to run spatial models for environmental processes. The simulator is named SimuMap because it is typically used to simulate spatial processes over a mapped representation of terrain. A model is formally represented in SimuMap as a set of coupled sub-models. The paper considers the situation where spatial processes operate at different time levels but are still integrated. Such a situation commonly occurs in watershed hydrology, where overland flow and stream channel flow have very different flow rates but are highly related, as they are subject to the same terrain runoff processes. SimuMap is able to run a network of sub-models that express different time-space derivatives for water flow processes. Sub-models may be coded generically with a map algebra programming language that uses a surface data model. To address the problem of differing time levels in simulation, the paper: (i) reviews general approaches for numerical solvers, (ii) considers the constraints that need to be enforced to use more adaptive time steps in simulations specified in discrete time, and (iii) discusses the scaling of transfer rates in equations that use different time bases for their time-space derivatives. A multistep scheme is proposed for SimuMap and is presented along with a description of its visual programming interface, its modelling formalisms and future plans. (C) 2003 Elsevier Ltd. All rights reserved.
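The time-level problem lends itself to a simple multirate illustration: a fast process is sub-stepped inside each slow step, with the coupling flux expressed on a common time base. The sketch below is a generic two-rate loop with invented rates, not SimuMap's actual scheme.

```python
# Generic multirate sketch: overland flow updated on a coarse step,
# channel flow sub-stepped on a finer one, with the coupling flux
# kept on a shared per-hour time base in both equations.

def step_coupled(overland, channel, dt_slow=1.0, substeps=10):
    dt_fast = dt_slow / substeps
    runoff = 0.05 * overland                 # transfer rate, per hour
    overland -= runoff * dt_slow             # slow sub-model update
    for _ in range(substeps):                # fast sub-model updates
        inflow = runoff                      # same time base (per hour)
        outflow = 0.8 * channel
        channel += (inflow - outflow) * dt_fast
    return overland, channel

state = (100.0, 5.0)
for _ in range(24):                          # one day of hourly slow steps
    state = step_coupled(*state)
```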
Abstract:
Formal specifications can precisely and unambiguously define the required behavior of a software system or component. However, formal specifications are complex artifacts that need to be verified to ensure that they are consistent, complete, and validated against the requirements. Specification testing or animation tools exist to assist with this by allowing the specifier to interpret or execute the specification. However, currently little is known about how to do this effectively. This article presents a framework and tool support for the systematic testing of formal, model-based specifications. Several important generic properties that should be satisfied by model-based specifications are first identified. Following the idea of mutation analysis, we then use variants or mutants of the specification to check that these properties are satisfied. The framework also allows the specifier to test application-specific properties. All properties are tested for a range of states that are defined by the tester in the form of a testgraph, which is a directed graph that partially models the states and transitions of the specification being tested. Tool support is provided for the generation of the mutants, for automatically traversing the testgraph and executing the test cases, and for reporting any errors. The framework is demonstrated on a small specification and its application to three larger specifications is discussed. Experience indicates that the framework can be used effectively to test small to medium-sized specifications and that it can reveal a significant number of problems in these specifications.
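A toy rendering of the framework's core loop: traverse a testgraph of states, execute each transition, and check the properties, repeating the run per mutant to see which variants are caught. The graph encoding, function names, and mutant handling here are illustrative, not the tool's API.

```python
# Illustrative testgraph traversal: nodes are states, edges carry the
# operation used to reach the target state. A property violation that a
# mutant fails to trigger suggests an inadequate testgraph or test set.

def traverse(testgraph, start, apply_op, invariant):
    """Walk every edge reachable from `start`, checking `invariant`."""
    errors, stack, seen = [], [start], {start}
    while stack:
        state = stack.pop()
        for op, target in testgraph.get(state, []):
            new_state = apply_op(state, op)
            if not invariant(new_state):
                errors.append((state, op))
            if target not in seen:
                seen.add(target)
                stack.append(target)
    return errors

# usage sketch: run once per mutant and compare against the original
# results = {m.name: traverse(graph, s0, m.apply, props) for m in mutants}
```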
Abstract:
Minimum/maximum autocorrelation factor (MAF) is a suitable algorithm for orthogonalization of a vector random field. Orthogonalization avoids the use of multivariate geostatistics during joint stochastic modeling of geological attributes. This manuscript demonstrates in a practical way that the computation of MAF is the same as discriminant analysis of the nested structures. Mathematica software is used to illustrate MAF calculations from a linear model of coregionalization (LMC). The limitation of two nested structures in the LMC for MAF is also discussed and linked to the effects of anisotropy and support. The analysis elucidates the matrix properties behind the approach and clarifies relationships that may be useful for model-based approaches. (C) 2003 Elsevier Science Ltd. All rights reserved.
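The equivalence can be sketched numerically: MAF loadings solve a generalized eigenproblem between the increment (variogram) covariance and the zero-lag covariance, which is exactly the two-matrix decomposition underlying discriminant analysis. A schematic NumPy/SciPy version, assuming a regularly sampled multivariate field:

```python
import numpy as np
from scipy.linalg import eigh

def maf(Z, lag=1):
    """Min/max autocorrelation factors of a multivariate series Z (n x p).

    MAF loadings a solve the generalized eigenproblem
    Gamma_h a = lambda B a, where B is the zero-lag covariance and
    Gamma_h the covariance of lag-h increments -- the same two-matrix
    decomposition used in discriminant analysis.
    """
    Zc = Z - Z.mean(axis=0)
    B = np.cov(Zc, rowvar=False)              # zero-lag covariance
    D = Zc[lag:] - Zc[:-lag]                  # lag-h increments
    Gamma = 0.5 * np.cov(D, rowvar=False)     # experimental variogram matrix
    vals, vecs = eigh(Gamma, B)               # generalized eigenproblem
    return Zc @ vecs, vals                    # first factor: max autocorrelation
```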
Abstract:
The integration of geo-information from multiple sources and of diverse nature in developing mineral favourability indexes (MFIs) is a well-known problem in mineral exploration and mineral resource assessment. Fuzzy set theory provides a convenient framework to combine and analyse qualitative and quantitative data independently of their source or characteristics. A novel, data-driven formulation for calculating MFIs based on fuzzy analysis is developed in this paper. Different geo-variables are considered fuzzy sets and their appropriate membership functions are defined and modelled. A new weighted average-type aggregation operator is then introduced to generate a new fuzzy set representing mineral favourability. The membership grades of the new fuzzy set are considered as the MFI. The weights for the aggregation operation combine the individual membership functions of the geo-variables, and are derived using information from training areas and L1 regression. The technique is demonstrated in a case study of skarn tin deposits and is used to integrate geological, geochemical and magnetic data. The study area covers a total of 22.5 km² and is divided into 349 cells, which include nine control cells. Nine geo-variables are considered in this study. Depending on the nature of the various geo-variables, four different types of membership functions are used to model the fuzzy membership of the geo-variables involved. (C) 2002 Elsevier Science Ltd. All rights reserved.
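The aggregation step reduces to a weighted average of membership grades per cell. The membership functions, variable names, and weights below are placeholders; in the paper the weights are fitted to the control cells via L1 regression.

```python
import numpy as np

# Placeholder memberships for three geo-variables over a grid of cells;
# the paper defines a tailored membership function per geo-variable and
# derives the weights from training areas by L1 regression.
def mu_linear(x, lo, hi):
    """Simple ramp membership function clipped to [0, 1]."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def mfi(geochem, magnetics, lithology_grade, weights=(0.5, 0.3, 0.2)):
    grades = np.stack([
        mu_linear(geochem, 10.0, 200.0),     # e.g. Sn concentration (ppm)
        mu_linear(magnetics, -50.0, 300.0),  # e.g. magnetic anomaly (nT)
        lithology_grade,                     # already a grade in [0, 1]
    ])
    w = np.asarray(weights)[:, None]
    return (w * grades).sum(axis=0) / w.sum()  # weighted-average operator
```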
Abstract:
Cyclic peptides containing oxazole and thiazole heterocycles have been examined for their capacity to be used as scaffolds in larger, more complex, protein-like structures. Both the macrocyclic scaffolds and the supramolecular structures derived therefrom have been visualised by molecular modelling techniques. These molecules are too symmetrical to examine structurally by NMR spectroscopy. The cyclic hexapeptide ([Aaa-Thz]₃, [Aaa-Oxz]₃) and cyclic octapeptide ([Aaa-Thz]₄, [Aaa-Oxz]₄) analogues are composed of dipeptide surrogates (Aaa: amino acid, Thz: thiazole, Oxz: oxazole) derived from intramolecular condensation of cysteine or serine/threonine side chains in dipeptides like Aaa-Cys, Aaa-Ser and Aaa-Thr. The five-membered heterocyclic rings, like thiazole, oxazole and reduced analogues like thiazoline, thiazolidine and oxazoline, have profound influences on the structures and bioactivities of the cyclic peptides derived therefrom. This work suggests that such constrained cyclic peptides can be used as scaffolds to create a range of novel protein-like supramolecular structures (e.g. cylinders, troughs, cones, multi-loop structures, helix bundles) that are comparable in size, shape and composition to bioactive surfaces of proteins. They may therefore represent interesting starting points for the design of novel artificial proteins and artificial enzymes. (C) 2002 Elsevier Science Inc. All rights reserved.
Abstract:
This combined PET and ERP study was designed to identify the brain regions activated in switching and divided attention between different features of a single object, using matched sensory stimuli and motor responses. The ERP data have previously been reported in this journal [64]. We now present the corresponding PET data. We identified partially overlapping neural networks with paradigms requiring the switching or dividing of attention between the elements of complex visual stimuli. Regions of activation were found in the prefrontal and temporal cortices and the cerebellum. Each task resulted in different prefrontal cortical regions of activation, supporting the view that the functional subspecialisation of the prefrontal and temporal cortices is based on the cognitive operations required rather than on the stimuli themselves. (C) 2003 Elsevier Science B.V. All rights reserved.
Abstract:
Mixture models implemented via the expectation-maximization (EM) algorithm are being increasingly used in a wide range of problems in pattern recognition such as image segmentation. However, the EM algorithm requires considerable computational time in its application to huge data sets such as a three-dimensional magnetic resonance (MR) image of over 10 million voxels. Recently, it was shown that a sparse, incremental version of the EM algorithm could improve its rate of convergence. In this paper, we show how this modified EM algorithm can be speeded up further by adopting a multiresolution kd-tree structure in performing the E-step. The proposed algorithm outperforms some other variants of the EM algorithm for segmenting MR images of the human brain. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
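For orientation, a plain per-voxel EM step for a spherical Gaussian mixture is sketched below. The paper's speed-up replaces this voxel-by-voxel E-step with computations over nodes of a multiresolution kd-tree, so groups of similar voxels share one responsibility calculation; the code is a baseline sketch, not the accelerated algorithm.

```python
import numpy as np

def em_step(X, means, variances, weights):
    """One EM iteration for a spherical Gaussian mixture (baseline).

    A kd-tree-accelerated E-step would evaluate `resp` once per tree
    node whose points have near-identical responsibilities.
    """
    n, d = X.shape
    k = len(weights)
    # E-step: responsibilities, computed here for every voxel separately
    log_p = np.empty((n, k))
    for j in range(k):
        sq = ((X - means[j]) ** 2).sum(axis=1)
        log_p[:, j] = (np.log(weights[j])
                       - 0.5 * d * np.log(2 * np.pi * variances[j])
                       - 0.5 * sq / variances[j])
    log_p -= log_p.max(axis=1, keepdims=True)      # numerical stability
    resp = np.exp(log_p)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixture parameters from the responsibilities
    nk = resp.sum(axis=0)
    means = (resp.T @ X) / nk[:, None]
    for j in range(k):
        sq = ((X - means[j]) ** 2).sum(axis=1)
        variances[j] = (resp[:, j] * sq).sum() / (d * nk[j])
    return means, variances, nk / n
```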
Abstract:
This study extends previous media equation research, which showed that flattery from a computer can produce the same general effects as flattery from humans. Specifically, the study explored the potential moderating effect of experience on the impact of flattery from a computer. One hundred and fifty-eight students from the University of Queensland voluntarily participated in the study. Participants interacted with a computer and were exposed to one of three kinds of feedback: praise (sincere praise), flattery (insincere praise), or control (generic feedback). Questionnaire measures assessing participants' affective state, attitudes and opinions were taken. Participants of high experience, but not low experience, displayed a media equation pattern of results, reacting to flattery from a computer in a manner congruent with people's reactions to flattery from other humans. High-experience participants tended to believe that the computer spoke the truth, experienced more positive affect as a result of flattery, and judged the computer's performance more favourably. These findings are interpreted in light of previous research, and the implications for software design in fields such as entertainment and education are considered. (C) 2004 Elsevier Ltd. All rights reserved.
Abstract:
A variety of current and future wired and wireless networking technologies can be transformed into seamless communication environments through the application of context-based vertical handovers. Such seamless communication environments are needed for future pervasive/ubiquitous systems. Pervasive systems are context-aware and need to adapt to context changes, including network disconnections and changes in network Quality of Service (QoS). Vertical handover is one of many possible adaptation methods. It allows users to roam freely between heterogeneous networks while maintaining the continuity of their applications. This paper proposes a vertical handover mechanism suitable for multimedia applications in pervasive systems. The paper focuses on the handover decision-making process, which uses context information regarding user devices, user location, network environment and requested QoS. (C) 2004 Elsevier B.V. All rights reserved.
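Handover decisions of this kind are often realized as a weighted score over context attributes, with hysteresis to avoid ping-pong switching. The following sketch is a generic policy with invented attribute names and weights, not the mechanism proposed in the paper.

```python
# Generic context-scored handover sketch: pick the candidate network
# with the best weighted score, but only switch when the gain over the
# current network exceeds a hysteresis margin.

WEIGHTS = {"bandwidth": 0.4, "latency": 0.3, "signal": 0.2, "cost": 0.1}

def score(net):
    """Normalize each attribute to [0, 1] and combine (invented scales)."""
    return (WEIGHTS["bandwidth"] * net["bandwidth"] / 100.0
            + WEIGHTS["latency"] * (1.0 - net["latency"] / 500.0)
            + WEIGHTS["signal"] * net["signal"]
            + WEIGHTS["cost"] * (1.0 - net["cost"]))

def decide(current, candidates, margin=0.05):
    best = max(candidates, key=score)
    return best if score(best) > score(current) + margin else current
```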
Abstract:
Interconnecting business processes across systems and organisations is considered to provide significant benefits, such as greater process transparency, higher degrees of integration, facilitation of communication, and consequently higher throughput in a given time interval. Achieving these benefits, however, requires tackling constraints: in the context of this paper, the privacy requirements of the involved workflows and their mutual dependencies. Workflow views are a promising conceptual approach to addressing the issue of privacy; however, this approach requires addressing the interdependencies between a workflow view and its adjacent private workflow. In this paper we focus on three aspects concerning support for the execution of cross-organisational workflows that have been modelled with a workflow view approach: (i) communication between the entities of a view-based workflow model, (ii) the impact of this communication on an extended workflow engine, and (iii) the design of a cross-organisational workflow architecture (CWA). We consider communication aspects in terms of state dependencies and control-flow dependencies. We propose to tightly couple a private workflow and its workflow view with state dependencies, while loosely coupling workflow views with control-flow dependencies. We introduce a Petri-net-based state transition approach that binds the states of private workflow tasks to their adjacent workflow view task. On the basis of these communication aspects we develop a CWA for view-based cross-organisational workflow execution. Its concepts are valid for mediated and unmediated interactions and do not prescribe a particular technology. The concepts are demonstrated by a scenario run by two extended workflow management systems. (C) 2004 Elsevier B.V. All rights reserved.
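The state-dependency coupling can be pictured as a mapping from private-task states to the state of the bound view task: every private transition propagates upward, while partners only ever see the abstracted view state. A schematic sketch with invented state names:

```python
# Schematic state binding: private tasks report their transitions to the
# bound view task, which exposes only an abstracted state to partners.

PRIVATE_TO_VIEW = {             # invented state names
    "initialized": "not_started",
    "running": "in_progress",
    "suspended": "in_progress",  # internal detail hidden from partners
    "completed": "done",
    "failed": "aborted",
}

class ViewTask:
    def __init__(self, name):
        self.name, self.state = name, "not_started"

    def on_private_transition(self, private_state):
        # tight coupling: every private transition updates the view state
        self.state = PRIVATE_TO_VIEW[private_state]
        return self.state  # partners see only this abstracted state
```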
Abstract:
Participation in at least 30 min of moderate-intensity activity on most days is assumed to confer health benefits. This study accordingly determined whether the more vigorous household and garden tasks (sweeping, window cleaning, vacuuming and lawn mowing) are performed by middle-aged men at a moderate intensity of 3-6 metabolic equivalents (METs) in the laboratory and at home. Measured energy expenditure during self-perceived moderate-paced walking was used as a marker of exercise intensity. Energy expenditure was also predicted via indirect methods. Thirty-six males [mean (SD): 40.0 (3.3) years; 179.5 (6.9) cm; 83.4 (14.0) kg] were measured for resting metabolic rate (RMR) and oxygen consumption (V̇O2) during the five activities using the Douglas bag method. Heart rate, respiratory frequency, CSA (Computer Science Applications) movement counts, Borg scale ratings of perceived exertion and Quetelet's index were also recorded as potential predictors of exercise intensity. Except for vacuuming in the laboratory, which was not significantly different from 3.0 METs (P = 0.98), the MET means in the laboratory and home were all significantly greater than 3.0 (P ≤ 0.006). The sweeping and vacuuming MET means were significantly higher (P
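For reference, a MET expresses activity energy expenditure as a multiple of resting metabolism, so with a measured RMR (as in this study) the calculation is a single ratio. A small sketch with invented example numbers:

```python
# METs from measured oxygen uptake: activity VO2 divided by resting VO2.
# The common shortcut assumes a resting VO2 of 3.5 ml O2/kg/min; studies
# that measure RMR directly (as here) use the measured value instead.

def mets(vo2_activity, vo2_rest=3.5):
    """Both arguments in ml O2 per kg per minute."""
    return vo2_activity / vo2_rest

print(mets(12.6))        # ~3.6 METs: inside the 3-6 MET moderate band
print(mets(12.6, 3.2))   # same activity against a measured resting rate
```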
Abstract:
In this work we discuss the effects of white and coloured noise perturbations on the parameters of a mathematical model of bacteriophage infection introduced by Beretta and Kuang in [Math. Biosc. 149 (1998) 57]. We numerically simulate the strong solutions of the resulting systems of stochastic ordinary differential equations (SDEs), assessing the global error, by means of numerical methods of both Euler-Taylor-expansion and stochastic Runge-Kutta type. (C) 2003 IMACS. Published by Elsevier B.V. All rights reserved.
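The simplest member of the Euler-Taylor family is the Euler-Maruyama scheme. The sketch below applies it to a generic scalar SDE rather than the bacteriophage system, whose drift and diffusion terms are what the paper perturbs with noise.

```python
import numpy as np

def euler_maruyama(f, g, x0, t_end, n, seed=0):
    """Strong approximation of dX = f(X) dt + g(X) dW on [0, t_end]."""
    rng = np.random.default_rng(seed)
    dt = t_end / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))   # Brownian increment
        x[k + 1] = x[k] + f(x[k]) * dt + g(x[k]) * dW
    return x

# e.g. logistic growth perturbed by multiplicative white noise
path = euler_maruyama(lambda x: x * (1 - x), lambda x: 0.1 * x,
                      x0=0.5, t_end=10.0, n=1000)
```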
Abstract:
Cox's theorem states that, under certain assumptions, any measure of belief is isomorphic to a probability measure. This theorem, although intended as a justification of the subjectivist interpretation of probability theory, is sometimes presented as an argument for more controversial theses. Of particular interest is the thesis that the only coherent means of representing uncertainty is via the probability calculus. In this paper I examine the logical assumptions of Cox's theorem and I show how these impinge on the philosophical conclusions thought to be supported by the theorem. I show that the more controversial thesis is not supported by Cox's theorem. (C) 2003 Elsevier Inc. All rights reserved.
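For readers unfamiliar with the theorem, the core assumptions are usually presented as two functional equations on a real-valued belief measure; the following is the standard textbook statement, a sketch rather than the paper's exact formalization.

```latex
% Cox-style assumptions (standard presentation): belief b is real-valued
% and, for some functions F and S,
\begin{align*}
  b(A \wedge B \mid C) &= F\bigl(b(A \mid B \wedge C),\; b(B \mid C)\bigr),\\
  b(\neg A \mid C)     &= S\bigl(b(A \mid C)\bigr).
\end{align*}
% Under regularity conditions on F and S, any such b is isomorphic to a
% probability measure: some monotone g gives g(b(A \mid C)) = P(A \mid C)
% satisfying the product and sum rules.
```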
Abstract:
A parallel computing environment to support the optimization of large-scale engineering systems is designed and implemented on Windows-based personal computer networks, using the master-worker model and the Parallel Virtual Machine (PVM). It decomposes a large engineering system into a number of smaller subsystems that are optimized in parallel on worker nodes, and coordinates the subsystem optimization results on the master node. The environment consists of six functional modules: the master control, the optimization model generator, the optimizer, the data manager, the monitor, and the post-processor. An object-oriented design of these modules is presented. The environment supports all steps from the generation of optimization models to their solution and visualization on networks of computers. User-friendly graphical interfaces make it easy to define the problem and to monitor and steer the optimization process. The environment has been verified on an example of large space-truss optimization. (C) 2004 Elsevier Ltd. All rights reserved.
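The master-worker decomposition generalizes beyond PVM. The sketch below reproduces the pattern with Python's multiprocessing standing in for PVM message passing, and a dummy subsystem objective standing in for the real optimizer module; the coordination rule is a placeholder.

```python
# Master-worker sketch of decomposition-based optimization: the master
# dispatches subsystem optimizations to workers, then coordinates the
# results into new coupling values for the next iteration.

from multiprocessing import Pool

def optimize_subsystem(args):
    sub_id, coupling = args
    # placeholder for the real subsystem optimization run on a worker
    local_optimum = coupling / (sub_id + 1)
    return sub_id, local_optimum

def master(n_subsystems=4, iterations=10):
    coupling = 1.0                            # coordination variable
    with Pool(n_subsystems) as pool:
        for _ in range(iterations):
            results = pool.map(optimize_subsystem,
                               [(i, coupling) for i in range(n_subsystems)])
            # master coordinates subsystem results (placeholder rule)
            coupling = sum(v for _, v in results) / n_subsystems
    return coupling

if __name__ == "__main__":
    print(master())
```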