880 results for TCTL (timed computation tree logic)


Relevance:

20.00%

Publisher:

Abstract:

In this paper we follow the BOID (Belief, Obligation, Intention, Desire) architecture to describe agents and agent types in Defeasible Logic. We argue, in particular, that the introduction of obligations can provide a new reading of the concepts of intention and intentionality. Then we examine the notion of social agent (i.e., an agent where obligations prevail over intentions) and discuss some computational and philosophical issues related to it. We show that the notion of social agent either requires more complex computations or has some philosophical drawbacks.
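
To make the notion of agent types concrete, here is a minimal sketch (our own illustration, not the paper's Defeasible Logic machinery): an agent type is treated as a priority order over mental components, and a "social" agent resolves conflicts by letting obligation-derived conclusions override intention-derived ones.

```python
# Hedged sketch: Conclusion, resolve and the component labels are illustrative,
# not the BOID formalism itself.
from dataclasses import dataclass

@dataclass
class Conclusion:
    literal: str        # e.g. "pay_tax" or "~pay_tax"
    component: str      # "OBL" (obligation), "INT" (intention), "DES" (desire)

def resolve(conclusions, priority):
    """Keep, for each atom, the conclusion whose component ranks highest."""
    best = {}
    for c in conclusions:
        atom = c.literal.lstrip("~")
        if atom not in best or priority.index(c.component) < priority.index(best[atom].component):
            best[atom] = c
    return list(best.values())

# A social agent lets obligations prevail over intentions.
social_agent = ["OBL", "INT", "DES"]
print(resolve([Conclusion("~pay_tax", "INT"), Conclusion("pay_tax", "OBL")], social_agent))
```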

Relevance:

20.00%

Publisher:

Abstract:

While some recent frameworks for cognitive agents have addressed the combination of mental attitudes with deontic concepts, they commonly ignore the representation of time. An exception is [1], which also manages some temporal aspects with respect to both cognition and normative provisions. In this paper we propose an extension of the logic presented in [1] with temporal intervals.
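
As a small illustration of what interval-based temporal qualification adds (our example, not the formalism of [1]), each deontic conclusion can carry a validity interval, and conflicts only matter where the intervals overlap:

```python
# Illustrative sketch only; the interval endpoints and the overlap test are ours.
def overlaps(a, b):
    """Closed intervals a = (start, end) and b = (start, end)."""
    return a[0] <= b[1] and b[0] <= a[1]

obligation_valid = (5, 10)   # hypothetical time points
intention_valid = (8, 12)
print(overlaps(obligation_valid, intention_valid))  # True: conflict window is [8, 10]
```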

Relevance:

20.00%

Publisher:

Abstract:

The one-way quantum computing model introduced by Raussendorf and Briegel [Phys. Rev. Lett. 86, 5188 (2001)] shows that it is possible to quantum compute using only a fixed entangled resource known as a cluster state, and adaptive single-qubit measurements. This model is the basis for several practical proposals for quantum computation, including a promising proposal for optical quantum computation based on cluster states [M. A. Nielsen, Phys. Rev. Lett. (to be published), quant-ph/0402005]. A significant open question is whether such proposals are scalable in the presence of physically realistic noise. In this paper we prove two threshold theorems showing that scalable fault-tolerant quantum computation may be achieved in implementations based on cluster states, provided the noise in the implementations is below some constant threshold value. Our first threshold theorem applies to a class of implementations in which entangling gates are applied deterministically, but with a small amount of noise. We expect this threshold to be applicable in a wide variety of physical systems. Our second threshold theorem is specifically adapted to proposals such as the optical cluster-state proposal, in which nondeterministic entangling gates are used. A critical technical component of our proofs is a pair of powerful theorems relating the properties of noisy unitary operations restricted to act on a subspace of state space to extensions of those operations acting on the entire state space. We expect these theorems to have a variety of applications in other areas of quantum-information science.
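
For readers unfamiliar with the resource this model relies on, the sketch below (ours, unrelated to the paper's fault-tolerance constructions) builds a small cluster state explicitly: prepare every qubit in |+> and apply a controlled-Z on each edge of the underlying graph.

```python
# Minimal sketch of cluster-state preparation; the bit-ordering convention is ours.
import numpy as np

def cluster_state(n_qubits, edges):
    plus = np.ones(2) / np.sqrt(2)
    state = plus.copy()
    for _ in range(n_qubits - 1):
        state = np.kron(state, plus)              # |+> on every qubit
    for (a, b) in edges:
        for idx in range(2 ** n_qubits):
            bits = format(idx, f"0{n_qubits}b")
            if bits[a] == "1" and bits[b] == "1":
                state[idx] *= -1.0                # controlled-Z phase flip
    return state

# Three-qubit linear cluster state: qubits connected as 0 - 1 - 2.
print(cluster_state(3, [(0, 1), (1, 2)]))
```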

Relevance:

20.00%

Publisher:

Abstract:

Quantum computers promise to greatly increase the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it has suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single-photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
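
The dual-rail encoding behind this proposal can be illustrated with standard textbook linear optics (the sketch below is ours and does not reproduce the paper's nondeterministic gates): one photon shared between two modes encodes a qubit, and beam splitters plus phase shifters implement single-qubit rotations.

```python
# Hedged sketch of dual-rail linear optics; the parameter choices are illustrative.
import numpy as np

def beam_splitter(theta):
    """Unitary mixing of two optical modes (50/50 splitter at theta = pi/4)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def phase_shifter(phi):
    """Phase shift applied to the second mode."""
    return np.diag([1.0, np.exp(1j * phi)])

# A phase shifter sandwiched between two 50/50 beam splitters (a Mach-Zehnder
# interferometer) acts as a single-qubit rotation on the dual-rail qubit.
qubit = np.array([1.0, 0.0])                       # photon in mode 0
U = beam_splitter(np.pi / 4) @ phase_shifter(np.pi / 2) @ beam_splitter(np.pi / 4)
print(np.abs(U @ qubit) ** 2)                      # detection probabilities per mode
```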

Relevance:

20.00%

Publisher:

Abstract:

The generalized Gibbs sampler (GGS) is a recently developed Markov chain Monte Carlo (MCMC) technique that enables Gibbs-like sampling of state spaces that lack a convenient representation in terms of a fixed coordinate system. This paper describes a new sampler, called the tree sampler, which uses the GGS to sample from a state space consisting of phylogenetic trees. The tree sampler is useful for a wide range of phylogenetic applications, including Bayesian, maximum likelihood, and maximum parsimony methods. A fast new algorithm to search for a maximum parsimony phylogeny is presented, using the tree sampler in the context of simulated annealing. The mathematics underlying the algorithm is explained and its time complexity is analyzed. The method is tested on two large data sets consisting of 123 sequences and 500 sequences, respectively. The new algorithm is shown to compare very favorably in terms of speed and accuracy to the program DNAPARS from the PHYLIP package.
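
As an outline of how simulated annealing drives such a search (a generic skeleton, with the paper's GGS tree proposal and parsimony scoring replaced by placeholder callables), one might write:

```python
# Generic simulated-annealing skeleton; propose_neighbour and parsimony_score
# stand in for the GGS move and the parsimony scoring used in the paper.
import math, random

def anneal_parsimony(initial_tree, parsimony_score, propose_neighbour,
                     t0=10.0, cooling=0.995, steps=10_000):
    tree, score = initial_tree, parsimony_score(initial_tree)
    best_tree, best_score = tree, score
    t = t0
    for _ in range(steps):
        cand = propose_neighbour(tree)            # e.g. a GGS-style topology move
        cand_score = parsimony_score(cand)
        # Always accept improvements; accept worse trees with Boltzmann probability.
        if cand_score <= score or random.random() < math.exp((score - cand_score) / t):
            tree, score = cand, cand_score
            if score < best_score:
                best_tree, best_score = tree, score
        t *= cooling                              # geometric cooling schedule
    return best_tree, best_score
```

With a concrete tree representation, a neighbourhood move and a parsimony function supplied, this skeleton becomes a usable (if simplistic) search.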

Relevance:

20.00%

Publisher:

Abstract:

We explore the feasibility of the computationally oriented institutional agency framework proposed by Governatori and Rotolo by testing it against an industrial-strength scenario. In particular, we show how to encode in defeasible logic the dispute resolution policy described in Article 67 of FIDIC.

Relevance:

20.00%

Publisher:

Abstract:

This article extends Defeasible Logic to deal with the contextual deliberation process of cognitive agents. First, we introduce meta-rules to reason with rules. Meta-rules are rules whose consequent is itself a rule for a motivational component, such as an obligation, intention or desire; in other words, they contain nested rules. Second, we introduce explicit preferences among rules, which deal with complex structures in which nested rules can be involved.
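
One way to picture meta-rules and rule preferences (a representation of our own, not the article's formal syntax) is as rules whose consequent may itself be a rule, together with an explicit preference relation:

```python
# Hedged sketch: Rule, the mode labels and the preference list are illustrative.
from dataclasses import dataclass

@dataclass
class Rule:
    antecedent: list            # list of literals
    consequent: object          # a literal, or another Rule (a nested rule)
    mode: str = "BEL"           # "BEL", "OBL", "INT", "DES", ...

inner_a = Rule(["deadline_close"], "finish_report", mode="INT")
inner_b = Rule(["family_dinner"], "~finish_report", mode="DES")
meta = Rule(["at_work"], inner_a)   # meta-rule: its consequent is itself a rule

# Explicit preference among (possibly nested) rules: the intention rule is
# preferred over the conflicting desire rule.
preferences = [(inner_a, inner_b)]
```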

Relevance:

20.00%

Publisher:

Abstract:

Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information from large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured, and can consist of text, categorical or numerical values. One of the important characteristics of data mining is its ability to deal with data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rule mining can be useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied in decision-making problems, and sequential and time series mining algorithms can be used for predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for data mining applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. So far, a number of classification algorithms have been put into practice. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probability methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbors (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include Associative Classification (Liu et al., 1998) and Ensemble Classification (Tumer, 1996).
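
To make one of the listed families concrete, here is a toy example-based (k-nearest-neighbour) classifier; the data are invented and the sketch is not tied to any of the cited systems.

```python
# Tiny k-NN sketch; the feature vectors and labels below are made up.
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs; query: feature_vector."""
    nearest = sorted(train, key=lambda xy: math.dist(xy[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((1.0, 1.0), "ok"), ((1.2, 0.9), "ok"),
         ((4.0, 4.2), "fault"), ((3.8, 4.0), "fault")]
print(knn_predict(train, (1.1, 1.0)))   # -> "ok"
```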

Relevance:

20.00%

Publisher:

Abstract:

As part of ACIAR project ASEM/2003/052, Improving Financial Returns to Smallholder Tree Farmers in the Philippines, plantations of timber trees on Leyte Island, the Philippines, were located using a systematic survey of the island. The survey was undertaken in order to compile a database of plantations which could be used to guide the planning of project activities. In addition to recording a range of qualitative and quantitative information for each plantation, the survey spatially referenced each site using a Global Positioning System (GPS) against electronic maps of the island held in a Geographical Information System (GIS). Microsoft Excel and Mapsource® software were used as the software links between GPS coordinates and the GIS. Mapping of farm positions was complicated by different datums being used for maps of Leyte Island, which caused GPS positions to be displaced from their equivalent positions on the map. Photos of the sites were hyperlinked to their map positions in the GIS to help staff recall site characteristics.
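
The datum mismatch mentioned above is the kind of problem usually handled by a coordinate transformation; the sketch below uses pyproj, and both the EPSG code assumed for the local Luzon datum and the sample coordinates are our assumptions, not details from the survey.

```python
# Hedged sketch: EPSG:4253 (Luzon 1911) is assumed for the island maps and
# should be checked against the map metadata; the coordinates are hypothetical.
from pyproj import Transformer

# GPS fixes are in WGS84 (EPSG:4326); reproject them onto the assumed map datum.
to_map_datum = Transformer.from_crs("EPSG:4326", "EPSG:4253", always_xy=True)

gps_lon, gps_lat = 124.85, 10.75                 # hypothetical plantation site
map_lon, map_lat = to_map_datum.transform(gps_lon, gps_lat)
print(map_lon, map_lat)
```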

Relevance:

20.00%

Publisher:

Abstract:

Objective: To compare the Barthel Index (BI), a well-known and accepted measure of functional disability, with Timed Up and Go (TUG). Method: Thirty-three stroke patients had their BI and TUG assessed by independent blinded observers. Results: There was good agreement between BI and TUG, with good repeatability. Conclusion: Thus TUG is a good measure of function pre-discharge but needs to be further validated on more disabled patients.
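
As an illustration of how agreement between two measures on different scales can be quantified (the scores below are invented and do not reproduce the study's data or statistics), a rank correlation is one common choice:

```python
# Illustrative only: hypothetical Barthel Index scores and TUG times.
from scipy.stats import spearmanr

barthel = [55, 70, 85, 90, 100, 60, 75]     # higher = less disability
tug_sec = [40, 28, 18, 15, 11, 35, 25]      # higher = slower, worse mobility

rho, p = spearmanr(barthel, tug_sec)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")  # strong negative association expected
```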

Relevance:

20.00%

Publisher:

Abstract:

Poor root development due to constraining soil conditions could be an important factor influencing the health of urban trees. Therefore, there is a need for efficient techniques to analyze the spatial distribution of tree roots. An analytical procedure for describing tree rooting patterns from X-ray computed tomography (CT) data is described and illustrated. Large irregularly shaped specimens of undisturbed sandy soil were sampled from various positions around the base of trees using field impregnation with epoxy resin to stabilize the cohesionless soil. Cores approximately 200 mm in diameter by 500 mm in height were extracted from these specimens. These large core samples were scanned with a medical X-ray CT device, and contiguous images of soil slices (2 mm thick) were thus produced. X-ray CT images are regarded as regularly spaced sections through the soil, although they are not actual 2D sections but matrices of voxels of approximately 0.5 mm x 0.5 mm x 2 mm. The images were used to generate the equivalent of horizontal root contact maps from which three-dimensional objects, assumed to be roots, were reconstructed. The resulting connected objects were used to derive indices of the spatial organization of roots, namely: root length distribution, root length density, root growth angle distribution, root spatial distribution, and branching intensity. The successive steps of the method, from sampling to generation of indices of tree root organization, are illustrated through a case study examining rooting patterns of valuable urban trees. (C) 1999 Elsevier Science B.V. All rights reserved.
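
As a very rough illustration of deriving one such index from a labelled voxel stack (a sketch of our own; the paper's contact-map reconstruction is considerably more involved), connected "root" voxels can be labelled and a crude root length density computed:

```python
# Crude, hedged sketch: root length is approximated by the number of 2 mm
# slices each connected object spans, which underestimates inclined roots.
import numpy as np
from scipy import ndimage

voxel = (0.5, 0.5, 2.0)                  # mm: x, y, slice thickness (per the paper)
binary = np.zeros((100, 100, 50), bool)  # thresholded stack: True where root material is
binary[40:42, 40:42, :] = True           # a fake vertical root for illustration

labels, n_roots = ndimage.label(binary)
slices_spanned = [np.ptp(np.nonzero(labels == i)[2]) + 1 for i in range(1, n_roots + 1)]
total_length_mm = sum(slices_spanned) * voxel[2]
volume_mm3 = binary.size * voxel[0] * voxel[1] * voxel[2]
print(n_roots, total_length_mm / volume_mm3)    # number of roots, length density (mm/mm^3)
```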

Relevance:

20.00%

Publisher:

Abstract:

This paper is devoted to the problems of finding the load flow feasibility, saddle node, and Hopf bifurcation boundaries in the space of power system parameters. The first part contains a review of the existing relevant approaches, including not-so-well-known contributions from Russia. The second part presents a new robust method for finding the power system load flow feasibility boundary on the plane defined by any three vectors of dependent variables (nodal voltages), called the Delta plane. The method exploits some quadratic and linear properties of the load flow equations and state matrices written in rectangular coordinates. An advantage of the method is that it does not require an iterative solution of nonlinear equations (except for the eigenvalue problem). In addition to benefits for visualization, the method is a useful tool for topological studies of power system multiple solution structures and stability domains. Although the power system application is developed, the method can be equally efficient for any quadratic algebraic problem.
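
The idea of a feasibility boundary determined by the quadratic structure of the load flow equations can be seen in the textbook two-bus case (our toy example, not the paper's Delta-plane method): solutions exist only while the discriminant of a quadratic in V^2 is non-negative.

```python
# Toy two-bus feasibility check: V^4 + (2QX - E^2) V^2 + X^2 (P^2 + Q^2) = 0.
import numpy as np

E, X = 1.0, 0.1                       # per-unit source voltage and line reactance

def voltage_solutions(P, Q):
    b = 2 * Q * X - E ** 2
    disc = b ** 2 - 4 * X ** 2 * (P ** 2 + Q ** 2)
    if disc < 0:
        return None                   # beyond the load flow feasibility boundary
    v2 = (-b + np.array([1.0, -1.0]) * np.sqrt(disc)) / 2
    return np.sqrt(v2)                # high- and low-voltage solutions

print(voltage_solutions(2.0, 0.5))    # feasible: two voltage solutions
print(voltage_solutions(6.0, 0.5))    # infeasible: past the saddle-node (nose) point
```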

Relevance:

20.00%

Publisher:

Abstract:

The majority of past and current individual-tree growth modelling methodologies have failed to characterise and incorporate structured stochastic components. Rather, they have relied on deterministic predictions or have added an unstructured random component to predictions. In particular, spatial stochastic structure has been neglected, despite being present in most applications of individual-tree growth models. Spatial stochastic structure (also called spatial dependence or spatial autocorrelation) eventuates when spatial influences such as competition and micro-site effects are not fully captured in models. Temporal stochastic structure (also called temporal dependence or temporal autocorrelation) eventuates when a sequence of measurements is taken on an individual-tree over time, and variables explaining temporal variation in these measurements are not included in the model. Nested stochastic structure eventuates when measurements are combined across sampling units and differences among the sampling units are not fully captured in the model. This review examines spatial, temporal, and nested stochastic structure and instances where each has been characterised in the forest biometry and statistical literature. Methodologies for incorporating stochastic structure in growth model estimation and prediction are described. Benefits from incorporation of stochastic structure include valid statistical inference, improved estimation efficiency, and more realistic and theoretically sound predictions. It is proposed in this review that individual-tree modelling methodologies need to characterise and include structured stochasticity. Possibilities for future research are discussed. (C) 2001 Elsevier Science B.V. All rights reserved.
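
One standard diagnostic for the spatial structure discussed here is Moran's I computed on model residuals; the sketch below uses synthetic tree positions, synthetic residuals and inverse-distance weights, and is not drawn from any particular study in the review.

```python
# Hedged sketch: synthetic data; Moran's I near -1/(n-1) suggests no spatial
# autocorrelation, values well above it suggest unmodelled spatial structure.
import numpy as np

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(50, 2))   # synthetic tree locations (m)
residuals = rng.normal(size=50)              # synthetic growth-model residuals

d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
with np.errstate(divide="ignore"):
    w = np.where(d > 0, 1.0 / d, 0.0)        # inverse-distance weights, w_ii = 0

z = residuals - residuals.mean()
n, W = len(z), w.sum()
moran_i = (n / W) * (z @ w @ z) / (z @ z)
print(f"Moran's I = {moran_i:.3f} (no-autocorrelation reference: {-1 / (n - 1):.3f})")
```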

Relevance:

20.00%

Publisher:

Abstract:

The salticid spider Cosmophasis bitaeniata preys on the larvae of the green tree ant Oecophylla smaragdina. Gas chromatography (GC) and gas chromatography-mass spectrometry (GC-MS) reveal that the cuticle of C. bitaeniata mimics the mono- and dimethylalkanes of the cuticle of its prey. Recognition bioassays with extracts of the cuticular hydrocarbons of ants and spiders revealed that foraging major workers did not respond aggressively to the extracts of the spiders or conspecific nestmates, but reacted aggressively to conspecific nonnestmates. Typically, the ants either failed to react (as with control treatments with no extracts) or they reacted nonaggressively as with conspecific nestmates. These data indicate that the qualitative chemical mimicry of ants by C. bitaeniata allows the spiders to avoid detection by major workers of O. smaragdina.