998 results for 319


Relevance: 10.00%

Publisher:

Abstract:

The ability of an agent to make quick, rational decisions in an uncertain environment is paramount for its applicability in realistic settings. Markov Decision Processes (MDPs) provide such a framework, but can only model uncertainty that can be expressed as probabilities. Possibilistic counterparts of MDPs make it possible to model imprecise beliefs, yet they cannot accurately represent probabilistic sources of uncertainty, and they lack the efficient online solvers found in the probabilistic MDP community. In this paper we advance the state of the art in three important ways. Firstly, we propose the first online planner for possibilistic MDPs by adapting the Monte-Carlo Tree Search (MCTS) algorithm. A key component is the development of efficient search structures to sample possibility distributions based on the DPY transformation introduced by Dubois, Prade, and Yager. Secondly, we introduce a hybrid MDP model that allows us to express both possibilistic and probabilistic uncertainty, and that is a proper extension of both probabilistic and possibilistic MDPs. Thirdly, we demonstrate that MCTS algorithms can readily be applied to solve such hybrid models.
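The DPY transformation mentioned above converts a possibility distribution into a probability distribution, which can then be sampled during tree search. As a minimal illustrative sketch (not the paper's search structures — the function names and the direct weighted-sampling strategy are our own), the standard Dubois-Prade-Yager transform can be written as:

```python
import random

def dpy_transform(poss):
    """Convert a possibility distribution (max degree = 1.0) into a
    probability distribution via the Dubois-Prade-Yager transform:
    p_i = sum_{j >= i} (pi_j - pi_{j+1}) / j, with outcomes sorted by
    decreasing possibility degree and pi_{n+1} = 0."""
    items = sorted(poss.items(), key=lambda kv: kv[1], reverse=True)
    degrees = [d for _, d in items] + [0.0]  # pad with pi_{n+1} = 0
    return {
        outcome: sum((degrees[j] - degrees[j + 1]) / (j + 1)  # j is 0-based
                     for j in range(i, len(items)))
        for i, (outcome, _) in enumerate(items)
    }

def sample(poss, rng=random):
    """Draw one outcome by sampling the DPY-transformed distribution."""
    probs = dpy_transform(poss)
    outcomes, weights = zip(*probs.items())
    return rng.choices(outcomes, weights=weights, k=1)[0]
```

For example, the possibility distribution {s1: 1.0, s2: 0.6, s3: 0.2} maps to probabilities that sum to one and preserve the possibility ordering, after which ordinary weighted sampling applies.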


The aim of this work was to study the possible deactivating effects of trace ammonia concentrations in biogas on methanation catalysts. It was found that small amounts of ammonia led to a slight decrease in catalyst activity. A decrease in catalyst deactivation by carbon formation was also observed when ammonia was adsorbed on the active catalyst sites: carbon formation and deposition were suppressed because they require a larger number of active sites than the methanation of carbon oxides does. Based on these findings, no special pretreatment for ammonia removal from the biogas fed to a methanation process is required.


Learning Bayesian networks with bounded tree-width has attracted much attention recently, because low tree-width allows exact inference to be performed efficiently. Some existing methods [12, 14] tackle the problem by using k-trees to learn the optimal Bayesian network with tree-width up to k. In this paper, we propose a sampling method to efficiently find representative k-trees by introducing an informative score function to characterize the quality of a k-tree. The proposed algorithm can efficiently learn a Bayesian network with tree-width at most k. Experimental results indicate that our approach is comparable with exact methods, but is much more computationally efficient.
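The abstract does not spell out the sampling procedure, but the recursive definition of a k-tree — start from a (k+1)-clique and repeatedly attach a new vertex to all vertices of an existing k-clique — already yields a simple generator, which is useful background for why k-trees bound tree-width. A hedged sketch (the uniform-choice policy is illustrative, not the paper's informative-score-guided sampler):

```python
import random
from itertools import combinations

def random_ktree(n, k, rng=None):
    """Grow a random k-tree on n vertices: start from a (k+1)-clique,
    then attach each new vertex to all k vertices of an existing
    k-clique. Every k-tree has tree-width exactly k, so a Bayesian
    network whose moralized, triangulated graph is a subgraph of one
    admits efficient exact inference."""
    rng = rng or random.Random(0)
    assert n >= k + 1
    # initial (k+1)-clique, edges stored as (u, v) with u < v
    edges = {(u, v) for u in range(k + 1) for v in range(u + 1, k + 1)}
    # all k-subsets of the initial clique are attachable k-cliques
    cliques = [list(c) for c in combinations(range(k + 1), k)]
    for new in range(k + 1, n):
        base = rng.choice(cliques)
        edges |= {(min(new, v), max(new, v)) for v in base}
        # each (k-1)-subset of base together with `new` is a new k-clique
        for drop in range(k):
            cliques.append([w for i, w in enumerate(base) if i != drop] + [new])
    return edges
```

A k-tree on n vertices always has kn − k(k+1)/2 edges, which gives a quick sanity check on the generator.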


We consider the problem of linking web search queries to entities from a knowledge base such as Wikipedia. Such linking enables converting a user's web search session into a footprint in the knowledge base that could be used to enrich the user profile. Traditional methods for entity linking have been directed towards finding entity mentions in text documents such as news reports, each of which is possibly linked to multiple entities, enabling the use of measures like entity-set coherence. Since web search queries are very small text fragments, criteria that rely on the existence of a multitude of mentions do not work well on them. We propose a three-phase method for linking web search queries to Wikipedia entities. The first phase performs IR-style scoring of entities against the search query to narrow down to a subset of entities, which is expanded into a larger set using hyperlink information in the second phase. Lastly, we use a graph traversal approach to identify the top entities to link the query to. Through an empirical evaluation on real-world web search queries, we illustrate that our methods significantly enhance linking accuracy over state-of-the-art methods.
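The abstract does not specify which graph traversal the third phase uses; one plausible instantiation is a personalised-PageRank-style walk over the hyperlink graph that restarts at the IR-scored seed entities from phase one. A sketch under that assumption (all names are hypothetical, and dangling-node mass is ignored for brevity):

```python
def top_entities(seed_scores, links, damping=0.85, iters=30, k=5):
    """Rank candidate entities by a personalised-PageRank-style walk
    over the hyperlink graph, restarting at the IR-scored seeds.

    seed_scores: entity -> IR score from phase one (the restart weights)
    links: entity -> list of hyperlinked entities (phase-two expansion)
    """
    total = sum(seed_scores.values())
    restart = {e: s / total for e, s in seed_scores.items()}
    nodes = (set(restart) | set(links)
             | {v for vs in links.values() for v in vs})
    rank = {e: restart.get(e, 0.0) for e in nodes}
    for _ in range(iters):
        # restart mass goes only to the seeds; link mass is split evenly
        nxt = {e: (1 - damping) * restart.get(e, 0.0) for e in nodes}
        for u, outs in links.items():
            if outs:
                share = damping * rank[u] / len(outs)
                for v in outs:
                    nxt[v] += share
        rank = nxt
    return sorted(rank, key=rank.get, reverse=True)[:k]
```

With this design, entities reachable from several high-scoring seeds accumulate rank even if their own IR score against the short query was low, which is the point of the expansion phase.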


Parasites have a variety of behavioural effects on their hosts, which can in turn affect species with which the host interacts. Here we review how these trait-mediated indirect effects of parasites can alter the outcomes of invader-native interactions, illustrating with examples from the literature and with particular regard to the invader-native crustacean systems studied in our laboratories. Parasites may potentially inhibit or exacerbate invasions via their effects on host behaviour, in addition to their direct virulence effects on hosts. In several crustacean systems, we have found that parasites influence both host predation rates on intra- and inter-guild prey and host vulnerability to being preyed upon. These trait effects can theoretically alter invasion impact and patterns of coexistence, as they indirectly affect interactions between predators and prey with the potential for further ramifications to other species in the food web. The fitness consequences of parasite-induced trait-mediated effects are rarely considered in traditional parasitological contexts, but demand attention in the context of ecological communities. We can regard these trait effects as a form of cryptic virulence that only becomes apparent when hosts are examined in the context of the other species with which they interact.


In recent years, the internet has become a key site for the portrayal of Rio de Janeiro’s favelas. This article examines blogging by favela residents and argues that digital culture constitutes a vital, and as yet not systematically explored, arena of research on the representation of Rio de Janeiro and its favelas. Based on ethnographically inspired research carried out in 2009–2010, this article examines two examples of blog ‘framing content’ (a sidebar and a static page) encountered during fieldwork, which functioned to establish a concrete link between the posts on the blogs in question, their authors, and a named favela, even when the posts were not explicitly about that favela. At the same time, the framing content also made visible, and affirmed, the translocal connections between that favela, other favelas, and the city as a whole. These illustrative examples from a wider study show how favela bloggers are engaged in resignifying and remapping the relationships between different empirical scales of locality (and associated identities) in Rio de Janeiro, demonstrating the contribution an interdisciplinary approach to the digital texts and practices of favela residents can make to an understanding of the contemporary city and its representational conundrums, from the perspective of ‘ordinary practitioners’.


Cloud data centres are implemented as large-scale clusters with demanding requirements for service performance, availability and cost of operation. As a result of scale and complexity, data centres typically exhibit large numbers of system anomalies resulting from operator error, resource over/under-provisioning, hardware or software failures and security issues. These anomalies are inherently difficult to identify and resolve promptly via human inspection. It is therefore vital for a cloud system to have automatic monitoring that detects potential anomalies and identifies their source. In this paper we present LADT, a lightweight anomaly detection tool for Cloud data centres which combines extended log analysis with rigorous correlation of system metrics, implemented by an efficient correlation algorithm that requires neither training nor complex infrastructure set-up. The LADT algorithm is based on the premise that there is a strong correlation between node-level and VM-level metrics in a cloud system. This correlation drops significantly in the event of a performance anomaly at the node level, and a sustained drop in the correlation can indicate the presence of a true anomaly in the node. The log analysis in LADT helps determine whether a correlation drop could be caused by naturally occurring cloud management activity such as VM migration, creation, suspension, termination or resizing. In this way, potential anomaly alerts are reasoned about to prevent false positives that could be caused by the cloud operator's activity. We demonstrate LADT with log analysis in a Cloud environment, showing how the log analysis is combined with the correlation of system metrics to achieve accurate anomaly detection.
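The correlation premise can be illustrated with a minimal sketch. This is not the authors' implementation: the window size, threshold and plain Pearson statistic are our illustrative choices, and a real deployment would additionally consult the logs to discard drops caused by VM management activity such as migration or resizing.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation; returns 0.0 when either series is constant."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def detect_anomalies(node_metric, vm_metric, window=10, threshold=0.5):
    """Flag windows where the node-level / VM-level correlation collapses.

    node_metric, vm_metric: aligned time series (e.g. node disk I/O and
    aggregate VM disk I/O). Returns (window_start, correlation) pairs
    for every window whose correlation falls below the threshold.
    """
    alerts = []
    for start in range(0, len(node_metric) - window + 1, window):
        r = pearson(node_metric[start:start + window],
                    vm_metric[start:start + window])
        if r < threshold:
            alerts.append((start, r))
    return alerts
```

In the healthy case the two series track each other and every window clears the threshold; when the VM-level metric stalls while the node-level metric keeps moving, the affected window is flagged.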


This paper presents a method for rational behaviour recognition that combines vision-based pose estimation with knowledge modeling and reasoning. The proposed method consists of two stages. First, RGB-D images are used to estimate body postures. Then, the estimated actions are evaluated to verify that they make sense. This method requires rational behaviour to be exhibited. To comply with this requirement, this work proposes a rational RGB-D dataset with two types of sequences, some for training and some for testing. Preliminary results show that the addition of knowledge modeling and reasoning leads to a significant increase in recognition accuracy compared to a system based only on computer vision.


Publicly available, outdoor webcams continuously view the world and share images. These cameras include traffic cams, campus cams, ski-resort cams, etc. The Archive of Many Outdoor Scenes (AMOS) is a project aiming to geolocate, annotate, archive, and visualize these cameras and images to serve as a resource for a wide variety of scientific applications. The AMOS dataset has archived over 750 million images of outdoor environments from 27,000 webcams since 2006. Our goal is to utilize the AMOS image dataset and crowdsourcing to develop reliable and valid tools to improve physical activity assessment via online, outdoor webcam capture of global physical activity patterns and urban built environment characteristics.
This project’s grand scale-up of capturing physical activity patterns and built environments is a methodological step forward in advancing real-time, non-labor-intensive assessment using webcams, crowdsourcing, and eventually machine learning. The combined use of webcams capturing outdoor scenes every 30 minutes and crowdsourced workers providing the labor of annotating the scenes allows for accelerated public health surveillance related to physical activity across numerous built environments. The ultimate goal of this public health and computer vision collaboration is to develop machine learning algorithms that will automatically identify and calculate physical activity patterns.


Petaflop architectures are currently being utilized efficiently to perform large-scale computations in atomic, molecular and optical collisions. We solve the Schrödinger or Dirac equation for the appropriate collision problem using the R-matrix or R-matrix with pseudo-states approach. We briefly outline the parallel methodology used and implemented for the current suite of Breit-Pauli and DARC codes. In this report, we show various examples comparing our theoretical results with experimental results obtained at synchrotron radiation facilities; the Cray architecture at HLRS plays an integral part in these computational projects.


Emerging web applications like cloud computing, Big Data and social networks have created the need for powerful data centres hosting hundreds of thousands of servers. Currently, data centres are based on general-purpose processors that provide high flexibility but lack the energy efficiency of customized accelerators. VINEYARD aims to develop an integrated platform for energy-efficient data centres based on new servers with novel, coarse-grain and fine-grain, programmable hardware accelerators. It will also build a high-level programming framework that allows end-users to seamlessly utilize these accelerators in heterogeneous computing systems by employing typical data-centre programming frameworks (e.g. MapReduce, Storm, Spark). This programming framework will further allow the hardware accelerators to be swapped in and out of the heterogeneous infrastructure so as to offer high flexibility and energy efficiency. VINEYARD will foster the expansion of the soft-IP-core industry, currently limited to embedded systems, into the data-centre market. VINEYARD plans to demonstrate the advantages of its approach in three real use cases: (a) a bio-informatics application for high-accuracy brain modeling, (b) two critical financial applications, and (c) a big-data analysis application.


Recently there has been increasing interest in the development of new methods that use Pareto optimality to deal with multi-objective criteria (for example, accuracy and architectural complexity). Once one has learned a model with such a method, the problem is then how to compare it with the state of the art. In machine learning, algorithms are typically evaluated by comparing their performance on different data sets by means of statistical tests. Unfortunately, the standard tests used for this purpose are not able to jointly consider several performance measures. The aim of this paper is to resolve this issue by developing statistical procedures that can account for multiple competing measures at the same time. In particular, we develop two tests: a frequentist procedure based on the generalized likelihood-ratio test and a Bayesian procedure based on a multinomial-Dirichlet conjugate model. We further extend them by discovering conditional independences among measures to reduce the number of parameters of such models, since the number of studied cases in such comparisons is usually small. Real data from a comparison among general-purpose classifiers is used to show a practical application of our tests.
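As a rough illustration of the Bayesian side only (a simplified sketch, not the authors' procedure: the four-category scheme, the uniform prior, and the "wins on both" criterion are our own assumptions), one can place a Dirichlet posterior over the joint win/loss categories and estimate by sampling how probable it is that one algorithm beats the other on both measures simultaneously:

```python
import random

def dirichlet_sample(alpha, rng):
    """Draw one sample from a Dirichlet distribution via gamma variates."""
    draws = [rng.gammavariate(a, 1.0) for a in alpha]
    s = sum(draws)
    return [d / s for d in draws]

def prob_a_dominates(counts, prior=1.0, samples=5000, rng=None):
    """Posterior probability that the joint category 'A better on both
    measures' is the most probable one, under a multinomial-Dirichlet
    conjugate model with a symmetric prior.

    counts[i] = number of data sets in joint category i, e.g.
    0: A better on both measures, 1: A better on measure 1 only,
    2: A better on measure 2 only, 3: B better on both.
    """
    rng = rng or random.Random(0)
    alpha = [c + prior for c in counts]  # conjugate posterior parameters
    hits = 0
    for _ in range(samples):
        theta = dirichlet_sample(alpha, rng)
        if theta[0] == max(theta):
            hits += 1
    return hits / samples
```

Because both measures enter through the joint categories, this kind of test accounts for them simultaneously, which is exactly what separate per-measure significance tests cannot do.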