42 results for Graph-based approach

in Aston University Research Archive


Abstract:

Short text messages, a.k.a. Microposts (e.g. Tweets), have proven to be an effective channel for revealing information about trends and events, ranging from those related to Disaster (e.g. Hurricane Sandy) to those related to Violence (e.g. the Egyptian revolution). Being informed about such events as they occur could be extremely important to authorities and emergency professionals by allowing such parties to respond immediately. In this work we study the problem of topic classification (TC) of Microposts, which aims to automatically classify short messages based on the subject(s) discussed in them. The accurate TC of Microposts, however, is a challenging task, since the limited number of tokens in a post often implies a lack of sufficient contextual information. In order to provide contextual information to Microposts, we present and evaluate several graph structures surrounding concepts present in linked knowledge sources (KSs). Traditional TC techniques enrich the content of Microposts with features extracted only from the Microposts' content. In contrast, our approach relies on the generation of different weighted semantic meta-graphs extracted from linked KSs. We introduce a new semantic graph, called the category meta-graph. This novel meta-graph provides a finer-grained categorisation of concepts, yielding a set of novel semantic features. Our findings show that such category meta-graph features effectively improve the performance of a topic classifier of Microposts. Furthermore, our goal is also to understand which semantic features contribute to the performance of a topic classifier. For this reason we propose an approach for automatic estimation of the accuracy loss of a topic classifier on new, unseen Microposts. We introduce and evaluate novel topic similarity measures, which capture the similarity between the KS documents and Microposts at a conceptual level, considering the enriched representation of these documents. Extensive evaluation in the context of Emergency Response (ER) and Violence Detection (VD) revealed that our approach outperforms previous approaches that use a single KS without linked data, as well as those using Twitter data only, by up to 31.4% in terms of F1 measure. Our main findings indicate that the new category graph contains useful information for TC and achieves comparable results to previously used semantic graphs. Furthermore, our results also indicate that the accuracy of a topic classifier can be accurately predicted using the enhanced text representation, outperforming previous approaches that rely on content-based similarity measures. © 2014 Elsevier B.V. All rights reserved.
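
As a rough illustration of the enrichment idea described in this abstract, the sketch below adds category features for recognised concepts before training an ordinary classifier. It is a minimal sketch only: the CATEGORY_GRAPH mapping, the toy Microposts and the topic labels are invented stand-ins for features that would in practice be derived from a linked knowledge source, and the code does not reproduce the authors' weighted semantic meta-graphs.

# Minimal sketch: enrich Microposts with category features from a
# (hypothetical) linked knowledge source, then train a topic classifier.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

CATEGORY_GRAPH = {                      # hypothetical concept -> categories
    "sandy": ["Hurricane", "NaturalDisaster"],
    "flood": ["NaturalDisaster"],
    "protest": ["CivilUnrest"],
    "riot": ["CivilUnrest", "Violence"],
}

def enrich(post):
    """Bag-of-words features plus categories of any recognised concepts."""
    feats = {}
    for tok in post.lower().split():
        feats["w=" + tok] = 1.0
        for cat in CATEGORY_GRAPH.get(tok, []):
            feats["cat=" + cat] = feats.get("cat=" + cat, 0.0) + 1.0
    return feats

posts = ["sandy flood hits the coast", "riot and protest downtown"]
labels = ["emergency", "violence"]      # toy topic labels

vec = DictVectorizer()
X = vec.fit_transform([enrich(p) for p in posts])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(vec.transform([enrich("flood warning issued")])))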

Abstract:

A case study demonstrates the use of a process-based approach to change in the implementation of an information system for road traffic accident reporting in a UK police force. The supporting tools of process mapping and business process simulation are used in the change process and assist in communicating the current process design and people's roles in the overall performance of that design. The simulation model is also used to predict the performance of new designs incorporating the use of information technology. The approach is seen to have a number of advantages in the context of a public sector organisation. These include the ability for personnel to move from a traditional grouping of staff in occupational groups, with relationships defined by reporting requirements, to a view of their role in a process that delivers a performance to a customer. By running the simulation through time it is also possible to gauge how changes at an operational level can lead to the meeting of strategic targets over time. Also, the ability of simulation to prove new designs was seen as particularly important in a government agency where past failures of information technology investments had contributed to a more risk-averse approach to their implementation. © 2004 Elsevier Ltd. All rights reserved.

Abstract:

Based on Goffman's definition that frames are general 'schemata of interpretation' that people use to 'locate, perceive, identify, and label', other scholars have used the concept in a more specific way to analyse media coverage. Frames are used in the sense of organising devices that allow journalists to select and emphasise topics, to decide 'what matters' (Gitlin 1980). Gamson and Modigliani (1989) consider frames as being embedded within 'media packages' that can be seen as 'giving meaning' to an issue. According to Entman (1993), framing comprises a combination of different activities such as: problem definition, causal interpretation, moral evaluation, and/or treatment recommendation for the item described. Previous research has analysed climate change coverage to test Downs's model of the issue attention cycle (Trumbo 1996), to uncover media biases in the US press (Boykoff and Boykoff 2004), to highlight differences between nations (Brossard et al. 2004; Grundmann 2007), or to analyse cultural reconstructions of scientific knowledge (Carvalho and Burgess 2005). In this paper we present data from a corpus-linguistics-based approach, drawing on the results of a pilot study conducted in spring 2008 using the Nexis news media archive. Based on comparative data from the US, the UK, France and Germany, we aim to show how the climate change issue has been framed differently in these countries and how this framing indicates differences in national climate change policies.

Abstract:

The role of beneficiaries in the humanitarian supply chain is highlighted in the imperative to meet their needs but disputed in terms of their actual decision-making and purchasing power. This paper discusses the use of a beneficiary-focused, community-based approach in the case of a post-crisis housing reconstruction programme. In the community-based approach, beneficiaries become active members of the humanitarian supply chain. Implications of this community-based approach are discussed in the light of supply chain design and aid effectiveness. © 2010 Taylor & Francis.

Abstract:

Decentralised supply chain formation involves determining the set of producers within a network able to supply goods to one or more consumers at the lowest cost. This problem is frequently tackled using auctions and negotiations. In this paper we show how it can be cast as an optimisation of a pairwise cost function. Optimising this class of functions is NP-hard, but good approximations to the global minimum can be obtained using Loopy Belief Propagation (LBP). Here we detail an LBP-based approach to the supply chain formation problem, involving decentralised message-passing between potential participants. Our approach is evaluated against a well-known double-auction method and an optimal centralised technique, showing several improvements: it obtains better solutions for most networks that admit a competitive equilibrium (competitive equilibrium, as defined in [3], is used as a means of classifying results on certain networks, to allow for minor inefficiencies in their auction protocol and agent bidding strategies), while also solving problems where no competitive equilibrium exists, for which the double-auction method frequently produces inefficient solutions.
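
To illustrate the message-passing idea in general terms (not the authors' supply chain formulation), the sketch below runs min-sum loopy belief propagation on a small pairwise cost function; the toy graph, unary costs and pairwise costs are invented for the example.

# Min-sum loopy belief propagation on a small pairwise cost function.
# Toy example only: graph, unary costs and pairwise costs are made up.
import numpy as np

K = 2                                   # states per variable
edges = [(0, 1), (1, 2), (0, 2)]        # undirected edges of a small loopy graph
unary = {0: np.array([0.0, 1.0]),       # unary[i][x_i]
         1: np.array([0.5, 0.0]),
         2: np.array([0.2, 0.3])}
pair = {e: np.array([[0.0, 1.0],        # pair[(i, j)][x_i, x_j]
                     [1.0, 0.0]]) for e in edges}

# messages m[(i, j)]: cost vector over the states of j, sent from i to j
msgs = {(i, j): np.zeros(K) for a, b in edges for i, j in [(a, b), (b, a)]}

def neighbours(i):
    return [j for (a, b) in edges for j in ([b] if a == i else [a] if b == i else [])]

for _ in range(50):                     # synchronous message updates
    new = {}
    for (i, j) in msgs:
        cost = pair[(i, j)] if (i, j) in pair else pair[(j, i)].T
        incoming = unary[i] + sum(msgs[(k, i)] for k in neighbours(i) if k != j)
        new[(i, j)] = np.min(cost + incoming[:, None], axis=0)
        new[(i, j)] -= new[(i, j)].min()        # normalise for numerical stability
    msgs = new

beliefs = {i: unary[i] + sum(msgs[(k, i)] for k in neighbours(i)) for i in unary}
assignment = {i: int(np.argmin(b)) for i, b in beliefs.items()}
print(assignment)                       # approximate minimiser of the pairwise cost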

Abstract:

We consider a Cauchy problem for the Laplace equation in a bounded region containing a cut, where the region is formed by removing a sufficiently smooth arc (the cut) from a bounded simply connected domain D. The aim is to reconstruct the solution on the cut from the values of the solution and its normal derivative on the boundary of the domain D. We propose an alternating iterative method which involves solving direct mixed problems for the Laplace operator in the same region. These mixed problems have either a Dirichlet or a Neumann boundary condition imposed on the cut and are solved by a potential approach. Each of these mixed problems is reduced to a system of integral equations of the first kind with logarithmic and hypersingular kernels and at most a square root singularity in the densities at the endpoints of the cut. The full discretization of the direct problems is realized by a trigonometric quadrature method which has super-algebraic convergence. The numerical examples presented illustrate the feasibility of the proposed method.
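
Read schematically, the problem and the alternation can be summarised as follows; this is a hedged paraphrase of the abstract, not the authors' exact formulation.

% Schematic statement of the Cauchy problem and the alternating scheme
% (a paraphrase of the abstract, not the exact formulation of the paper).
\[
\Delta u = 0 \ \text{in } D \setminus \Gamma, \qquad
u = f \ \text{and} \ \frac{\partial u}{\partial \nu} = g \ \text{on } \partial D,
\]
where $\Gamma$ denotes the cut and the goal is to recover $u$ on $\Gamma$.
Starting from an initial guess for the missing data on $\Gamma$, each iteration
solves direct mixed problems in $D \setminus \Gamma$ with either a Dirichlet or
a Neumann condition imposed on $\Gamma$ and part of the Cauchy data used on
$\partial D$; the trace (respectively the normal derivative) of one solution on
$\Gamma$ supplies the boundary data for the next mixed problem, and the
alternation is repeated until the reconstruction on $\Gamma$ stabilises.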

Abstract:

We present a novel market-based method, inspired by retail markets, for resource allocation in fully decentralised systems where agents are self-interested. Our market mechanism requires no coordinating node or complex negotiation. The stability of outcome allocations, those at equilibrium, is analysed and compared for three buyer behaviour models. In order to capture the interaction between self-interested agents, we propose the use of competitive coevolution. Our approach is highly scalable and may be tuned to achieve specified outcome resource allocations. We demonstrate the behaviour of our approach in simulation, where evolutionary market agents act on behalf of service-providing nodes to adaptively price their resources over time in response to market conditions. We show that this leads the system to the predicted outcome resource allocation. Furthermore, the system remains stable in the presence of small changes in price when buyers' decision functions degrade gracefully. © 2009 The Author(s).
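
The adaptive-pricing behaviour can be illustrated with a deliberately simplified sketch in which sellers raise or lower prices in proportion to excess demand and buyers always pick the cheapest seller; this plain proportional update stands in for the paper's evolutionary market agents, and all numbers are invented.

# Simplified illustration of decentralised, demand-responsive pricing.
# A plain proportional price update stands in for evolutionary market agents.
prices = [1.0, 1.0, 1.0]        # one price per selling node
capacity = 5                    # units each seller can supply per round
n_buyers = 12

for _ in range(30):
    demand = [0] * len(prices)
    for _buyer in range(n_buyers):
        cheapest = min(range(len(prices)), key=lambda s: prices[s])
        demand[cheapest] += 1
    for s in range(len(prices)):
        # raise the price when demand exceeds capacity, lower it otherwise
        prices[s] *= 1.0 + 0.1 * (demand[s] - capacity) / capacity
        prices[s] = max(prices[s], 0.01)

print([round(p, 3) for p in prices])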

Abstract:

Data envelopment analysis (DEA) has proven to be an excellent data-oriented efficiency analysis method for comparing decision making units (DMUs) with multiple inputs and multiple outputs. In conventional DEA, it is assumed that the status of each measure is clearly known as either input or output. However, in some situations, a performance measure can play an input role for some DMUs and an output role for others. Cook and Zhu [Eur. J. Oper. Res. 180 (2007) 692–699] referred to these variables as flexible measures. This paper proposes an alternative model in which each flexible measure is treated as either an input or an output variable so as to maximize the technical efficiency of the DMU under evaluation. The main focus of the paper is on the impact that flexible measures have on the definition of the production possibility set (PPS) and on the assessment of technical efficiency. An example involving UK higher education institutions shows the applicability of the proposed approach.
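
In ratio form, a generic model of this kind can be sketched with a binary indicator deciding the role of each flexible measure; the formulation below is a schematic sketch in the spirit of Cook and Zhu's flexible measures, not necessarily the exact model proposed in the paper.

% Schematic ratio-form DEA model with flexible measures; the binary d_k decides
% whether flexible measure z_k acts as an output (d_k = 1) or an input (d_k = 0).
% A sketch only, not necessarily the model proposed in the paper.
\[
\max_{u, v, w, d}\;
\frac{\sum_r u_r\, y_{r o} + \sum_k d_k\, w_k\, z_{k o}}
     {\sum_i v_i\, x_{i o} + \sum_k (1 - d_k)\, w_k\, z_{k o}}
\quad \text{s.t.} \quad
\frac{\sum_r u_r\, y_{r j} + \sum_k d_k\, w_k\, z_{k j}}
     {\sum_i v_i\, x_{i j} + \sum_k (1 - d_k)\, w_k\, z_{k j}} \le 1
\ \ \forall j, \qquad
u, v, w \ge 0, \ \ d_k \in \{0, 1\},
\]
where DMU $o$ is the unit under evaluation, $x_{ij}$ and $y_{rj}$ are the ordinary inputs and outputs of DMU $j$, and $z_{kj}$ are its flexible measures.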

Abstract:

Due to copyright restrictions, only available for consultation at Aston University Library and Information Services with prior arrangement.

Abstract:

Background - Modelling the interaction between potentially antigenic peptides and Major Histocompatibility Complex (MHC) molecules is a key step in identifying potential T-cell epitopes. For Class II MHC alleles, the binding groove is open at both ends, causing ambiguity in the positional alignment between the groove and the peptide, as well as creating uncertainty as to which parts of the peptide interact with the MHC. Moreover, the antigenic peptides have variable lengths, making naive modelling methods difficult to apply. This paper introduces a kernel method that can handle variable-length peptides effectively by quantifying similarities between peptide sequences and integrating these into the kernel. Results - The kernel approach presented here shows increased prediction accuracy, with a significantly higher number of true positives and negatives, on multiple MHC class II alleles when tested on data sets from MHCPEP [1], MHCBN [2], and MHCBench [3]. Evaluation by cross-validation, when segregating binders and non-binders, produced an average AROC of 0.824 for the MHCBench data sets (up from 0.756), and an average AROC of 0.96 for multiple alleles of the MHCPEP database. Conclusion - The method improves performance over existing state-of-the-art methods of MHC class II peptide binding prediction by using a custom, knowledge-based representation of peptides. Similarity scores, in contrast to a fixed-length, pocket-specific representation of amino acids, provide a flexible and powerful way of modelling MHC binding, and can easily be applied to other dynamic sequence problems.
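
A much-simplified string kernel in the same spirit (similarity scores between variable-length sequences fed to a kernel machine) is sketched below; the shared 3-mer count, the toy peptides and the labels are illustrative only and do not reproduce the authors' kernel.

# Toy similarity kernel for variable-length peptides, used with a precomputed
# kernel SVM. Illustrative only; not the kernel proposed in the paper.
import numpy as np
from sklearn.svm import SVC

def kmer_kernel(s, t, k=3):
    """Count shared k-mers between two peptide sequences of any length."""
    kmers_s = [s[i:i + k] for i in range(len(s) - k + 1)]
    kmers_t = [t[i:i + k] for i in range(len(t) - k + 1)]
    return float(sum(a == b for a in kmers_s for b in kmers_t))

def gram(A, B):
    """Gram matrix of kernel values between two lists of sequences."""
    return np.array([[kmer_kernel(a, b) for b in B] for a in A])

# toy peptides (variable length) with binder / non-binder labels
train = ["AKFVAAWTLKAAA", "GYKVLVLNPSVAAT", "LLLLDVAPAA", "QQQQRSTVNP"]
y = [1, 1, 0, 0]
test = ["AKFVAAWTLK"]

clf = SVC(kernel="precomputed").fit(gram(train, train), y)
print(clf.predict(gram(test, train)))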

Abstract:

In-Motes is a mobile agent middleware that provides an intelligent framework for deploying applications in Wireless Sensor Networks (WSNs). In-Motes is based on the injection of mobile agents into the network that can migrate or clone themselves, following specific rules and performing application-specific tasks. By doing so, each mote is given a certain degree of perception, cognition and control, forming the basis for its intelligence. Our middleware incorporates technologies such as Linda-like tuplespaces and a federated system architecture in order to obtain a high degree of collaboration and coordination within the agent society. A set of behavioural rules inspired by a community of bacterial strains is also generated as the means for robustness of the WSN. In this paper, we present In-Motes and provide a detailed evaluation of its implementation for MICA2 motes.
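
A Linda-like tuplespace offers out/read/in primitives over tuples with pattern matching; the toy sketch below illustrates that coordination style in Python and is not In-Motes code, which targets MICA2 motes.

# Toy Linda-like tuplespace: out() adds a tuple, rd() reads a matching tuple,
# in_() reads and removes one. None in a pattern acts as a wildcard.
class TupleSpace:
    def __init__(self):
        self._tuples = []

    def out(self, tup):
        self._tuples.append(tup)

    def _match(self, pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == v for p, v in zip(pattern, tup))

    def rd(self, pattern):
        return next((t for t in self._tuples if self._match(pattern, t)), None)

    def in_(self, pattern):
        t = self.rd(pattern)
        if t is not None:
            self._tuples.remove(t)
        return t

ts = TupleSpace()
ts.out(("temperature", "mote-7", 23.5))         # an agent publishes a reading
ts.out(("temperature", "mote-9", 24.1))
print(ts.rd(("temperature", "mote-7", None)))   # non-destructive read
print(ts.in_(("temperature", None, None)))      # destructive read (removes it)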

Abstract:

Inventory control in complex manufacturing environments encounters various sources of uncertainty and imprecision. This paper presents a fuzzy knowledge-based approach to solving the problem of order quantity determination in the presence of uncertain demand, lead time and actual inventory level. Uncertain data are represented by fuzzy numbers, and vaguely defined relations between them are modelled by fuzzy if-then rules. The proposed representation and inference mechanism are verified using a large number of examples. The results of three representative cases are summarised. Finally, a comparison between the developed fuzzy knowledge-based approach and traditional probabilistic approaches is discussed.
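
A heavily simplified illustration of the idea, using triangular fuzzy sets, two if-then rules and weighted-average defuzzification, is sketched below; the membership functions, rule base and numbers are invented for the example and are not the paper's model.

# Toy fuzzy rule-based order-quantity sketch: triangular membership functions,
# two if-then rules, weighted-average defuzzification. All numbers are invented.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def order_quantity(demand, stock):
    # fuzzify the crisp inputs with triangular sets
    demand_low  = tri(demand, 0, 40, 80)
    demand_high = tri(demand, 60, 120, 180)
    stock_low   = tri(stock, 0, 10, 30)
    stock_high  = tri(stock, 20, 50, 80)

    # two if-then rules with constant (Sugeno-style) consequents
    rules = [
        (min(demand_high, stock_low), 120.0),   # high demand, low stock -> large order
        (min(demand_low, stock_high), 10.0),    # low demand, high stock -> small order
    ]

    # weighted-average defuzzification
    total = sum(w for w, _ in rules)
    return sum(w * q for w, q in rules) / total if total else 0.0

print(order_quantity(demand=110, stock=12))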

Abstract:

Advocates of ‘local food’ claim it serves to reduce food miles and greenhouse gas emissions, improve food safety and quality, strengthen local economies and enhance social capital. We critically review the philosophical and scientific rationale for this assertion, and consider whether conventional scientific approaches can help resolve the debate. We conclude that food miles are a poor indicator of the environmental and ethical impacts of food production. Only through combining spatially explicit life cycle assessment with analysis of social issues can the benefits of local food be assessed. This type of analysis is currently lacking for nearly all food chains.