44 results for Real-world problem


Relevance:

90.00%

Publisher:

Abstract:

This thesis offers a methodology for studying and designing effective communication mechanisms in human activities. The methodology is focused on the management of complexity. It is argued that complexity is not something objective that can be worked out analytically, but something subjective that depends on the viewpoint; it is further argued that while certain social contexts may inhibit a viewpoint's capability to deal with complexity, others may enhance it. Certain organisational structures are more likely than others to allow individuals to release their potential; hence the relevance of studying and designing effective organisations. The first part of the thesis offers a `cybernetic methodology' for problem solving in human activities; the second offers a `method' for studying and designing organisations. The cybernetic methodology discussed in this work is rooted in second-order cybernetics, or the cybernetics of observing systems (Von Foerster 1979, Maturana and Varela 1980). Its main tenet is that the known properties of the real world reside in the individual and not in the world itself. This view, which emphasises an appreciation of reality that is by nature one-sided and unilateral, triggers the need for dialogue and conversation to construct it. The `method' for studying and designing organisations is based on Beer's Viable System Model (Beer 1979, 1981, 1985). This model permits us to assess how successful an organisation is in coping with its environmental complexity and, moreover, to establish how to make its responses to this complexity more effective. These features of the model are of great significance in a world where complexity is perceived to be growing at an unthinkable pace. But `seeing' these features of the model assumes an effective appreciation of organisational complexity; hence the need for the methodological discussions offered in the first part of the thesis.

Relevance:

90.00%

Publisher:

Abstract:

Computer-based discrete event simulation (DES) is one of the most commonly used aids for the design of automotive manufacturing systems. However, DES tools represent machines in extensive detail while representing workers only as simple resources. This presents a problem when modelling systems with a highly manual work content, such as an assembly line. This paper describes research at Cranfield University, in collaboration with the Ford Motor Company, founded on the assumption that human variation is the cause of a large percentage of the disparity between simulation predictions and real-world performance. The research aims to improve the accuracy and reliability of simulation predictions by including models of human factors.
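To make the modelling gap concrete, here is a minimal sketch, using the open-source simpy library, of a two-station manual line with a tight interstation buffer; the one-minute mean cycle time and the lognormal variation parameter are illustrative assumptions, not Ford data. Even with equal mean cycle times, human-like variability alone depresses throughput through blocking and starving, which is the kind of disparity the research attributes to omitted human factors.

```python
import random
import simpy

def line(env, cycle, done):
    buf = simpy.Store(env, capacity=1)   # tight interstation buffer

    def station1():
        while True:
            yield env.timeout(cycle())   # work one job
            yield buf.put(object())      # blocks if the buffer is full

    def station2():
        while True:
            yield buf.get()              # starves if the buffer is empty
            yield env.timeout(cycle())
            done.append(env.now)         # record completion time

    env.process(station1())
    env.process(station2())

def throughput(cycle, minutes=480, seed=1):
    random.seed(seed)
    env = simpy.Environment()
    done = []
    line(env, cycle, done)
    env.run(until=minutes)
    return len(done)

# Machine-like worker: constant 1.0-minute cycle.
fixed = throughput(lambda: 1.0)
# Human worker: same 1.0 mean, lognormal spread (sigma is an invented value).
human = throughput(lambda: random.lognormvariate(-0.35**2 / 2, 0.35))
print(f"jobs/shift, fixed cycle: {fixed}; with human variation: {human}")
```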

Relevance:

90.00%

Publisher:

Abstract:

Although crisp data are fundamentally indispensable for determining the profit Malmquist productivity index (MPI), the observed values in real-world problems are often imprecise or vague. These imprecise or vague data can be suitably characterized with fuzzy and interval methods. In this paper, we reformulate the conventional profit MPI problem as an imprecise data envelopment analysis (DEA) problem, and propose two novel methods for measuring the overall profit MPI when the inputs, outputs, and price vectors are fuzzy or vary in intervals. We develop a fuzzy version of the conventional MPI model by using a ranking method, and solve the model with a commercial off-the-shelf DEA software package. In addition, we define an interval for the overall profit MPI of each decision-making unit (DMU) and divide the DMUs into six groups according to the intervals obtained for their overall profit efficiency and MPIs. We also present two numerical examples to demonstrate the applicability of the two proposed models and exhibit the efficacy of the procedures and algorithms. © 2011 Elsevier Ltd.
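The paper's models are DEA linear programs; as a far simpler illustration of how interval-valued data propagate to an interval-valued profitability measure, the sketch below applies plain interval arithmetic to a revenue/cost ratio for a single hypothetical DMU. All numbers are invented, and this is not the authors' MPI formulation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __truediv__(self, o):
        # valid for strictly positive intervals, as prices/quantities are here
        return Interval(self.lo / o.hi, self.hi / o.lo)

def dot(prices, quantities):
    """Interval inner product p.q for positive interval data."""
    total = Interval(0.0, 0.0)
    for p, q in zip(prices, quantities):
        total = total + Interval(p.lo * q.lo, p.hi * q.hi)
    return total

# Illustrative DMU: interval revenue over interval cost -> profitability interval.
out_prices = [Interval(4.0, 5.0)]; outputs = [Interval(8.0, 10.0)]
in_prices  = [Interval(1.0, 1.5)]; inputs   = [Interval(5.0, 6.0)]
ratio = dot(out_prices, outputs) / dot(in_prices, inputs)
print(f"profitability lies in [{ratio.lo:.3f}, {ratio.hi:.3f}]")
```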

Relevance:

90.00%

Publisher:

Abstract:

Latent topics derived by topic models such as Latent Dirichlet Allocation (LDA) are the result of hidden thematic structures which provide further insights into the data. The automatic labelling of such topics derived from social media, however, poses new challenges, since topics may characterise novel events happening in the real world. Existing automatic topic labelling approaches which depend on external knowledge sources become less applicable here, since relevant articles/concepts for the extracted topics may not exist in external sources. In this paper we propose to address the problem of automatically labelling latent topics learned from Twitter as a summarisation problem. We introduce a framework which applies summarisation algorithms to generate topic labels. These algorithms are independent of external sources and rely only on the identification of dominant terms in documents related to the latent topic. We compare the performance of existing state-of-the-art summarisation algorithms. Our results suggest that summarisation algorithms generate better topic labels, capturing event-related context, than the top-n terms returned by LDA. © 2014 Association for Computational Linguistics.
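The paper compares several summarisation algorithms; the sketch below is a deliberately crude frequency-based stand-in that illustrates the core idea of labelling a topic from dominant terms in its associated documents, with no external knowledge source. The stop-word list and example tweets are invented.

```python
from collections import Counter
import re

STOP = {"the", "a", "of", "to", "in", "and", "on", "for", "is", "at", "after"}

def label_topic(topic_docs, n=3):
    """Label a latent topic from its most strongly associated documents
    by picking dominant terms -- a crude frequency-based summariser."""
    counts = Counter()
    for doc in topic_docs:
        for tok in re.findall(r"[a-z']+", doc.lower()):
            if tok not in STOP:
                counts[tok] += 1
    return " ".join(term for term, _ in counts.most_common(n))

docs = [
    "train derails near station, commuters delayed",
    "rail operator apologises after train derailment",
    "derailment investigation begins at the station",
]
print(label_topic(docs))  # -> "train station derailment"
```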

Relevance:

90.00%

Publisher:

Abstract:

Graph embedding is a general framework for subspace learning. However, because of the well-known outlier sensitivity of the L2-norm, conventional graph embedding is not robust to the outliers which occur in many practical applications. In this paper, an improved graph embedding algorithm (termed LPP-L1) is proposed by replacing the L2-norm with the L1-norm. In addition to its robustness, LPP-L1 avoids the small-sample-size problem. Experimental results on both synthetic and real-world data demonstrate these advantages. © 2009 Elsevier B.V. All rights reserved.
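For orientation, here is a minimal sketch of the conventional (L2) LPP baseline that LPP-L1 robustifies: build a neighbourhood affinity graph, then solve the generalized eigenproblem on the graph Laplacian. The heat-kernel width and neighbourhood size are arbitrary illustrative choices; the paper's contribution, replacing the L2-norm objective with an L1-norm one, is not shown here.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, dim=2, k=5, t=1.0):
    """Conventional (L2) Locality Preserving Projections: the baseline
    that LPP-L1 robustifies by replacing the L2-norm with the L1-norm."""
    n = X.shape[0]
    D2 = cdist(X, X, "sqeuclidean")
    W = np.exp(-D2 / t)                      # heat-kernel affinities
    idx = np.argsort(D2, axis=1)[:, 1:k + 1] # k nearest neighbours (no self)
    M = np.zeros((n, n))
    for i in range(n):
        M[i, idx[i]] = W[i, idx[i]]
    M = np.maximum(M, M.T)                   # symmetrise the graph
    Dg = np.diag(M.sum(axis=1))
    L = Dg - M                               # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ Dg @ X + 1e-9 * np.eye(X.shape[1])  # regularise for stability
    vals, vecs = eigh(A, B)                  # smallest eigenvectors span the subspace
    return vecs[:, :dim]

X = np.random.default_rng(0).normal(size=(100, 10))
P = lpp(X)                                   # projection matrix, shape (10, 2)
Y = X @ P                                    # 2-D embedding
```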

Relevance:

90.00%

Publisher:

Abstract:

Computer simulators of real-world processes are often computationally expensive and require many inputs. The problem of computational expense can be handled using emulation technology; however, highly multidimensional input spaces may require more simulator runs to train and validate the emulator. We aim to reduce the dimensionality of the problem by screening the simulator's inputs for nonlinear effects on the output, rather than distinguishing between negligible and active effects. Our proposed method is built upon the elementary effects (EE) method for screening and uses a threshold value to separate the inputs with linear and nonlinear effects. The technique is simple to implement and acts in a sequential way to keep the number of simulator runs to a minimum while identifying the inputs that have nonlinear effects. The algorithm is applied to a set of simulated examples and a rabies disease simulator, where we observe run savings ranging between 28% and 63% compared with the batch EE method. Supplementary materials for this article are available online.
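A minimal batch-mode sketch of the underlying idea: compute elementary effects at several random base points and flag as nonlinear those inputs whose effects vary across the space (large standard deviation) against a threshold. The paper's method applies this sequentially to save runs; the toy simulator and threshold below are invented for illustration.

```python
import numpy as np

def elementary_effects(f, d, r=20, delta=0.1, seed=0):
    """Batch elementary effects: r effects per input at random base points
    in [0,1]^d. The paper's method applies the same idea sequentially."""
    rng = np.random.default_rng(seed)
    ee = np.empty((r, d))
    for j in range(r):
        x = rng.uniform(0, 1 - delta, size=d)
        fx = f(x)
        for i in range(d):
            xp = x.copy()
            xp[i] += delta
            ee[j, i] = (f(xp) - fx) / delta
    return ee

def nonlinear_inputs(ee, threshold=0.5):
    """An input whose effects vary across the space (large sigma) is
    flagged nonlinear; near-constant effects indicate linearity."""
    return np.flatnonzero(ee.std(axis=0) > threshold)

f = lambda x: 2.0 * x[0] + np.sin(6 * x[1]) + 0.01 * x[2]  # toy simulator
ee = elementary_effects(f, d=3)
print(nonlinear_inputs(ee))  # expect [1]: only the sine term is nonlinear
```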

Relevance:

90.00%

Publisher:

Abstract:

This paper considers the problem of low-dimensional visualisation of very high-dimensional information sources for the purpose of situation awareness in the maritime environment. In response to the requirement for human decision-support aids to reduce information overload (specifically, for data amenable to inter-point relative similarity measures) in the below-water maritime domain, we are investigating a preliminary prototype topographic visualisation model. The focus of the current paper is on the mathematical problem of exploiting a relative dissimilarity representation of signals in a visual informatics mapping model, driven by real-world sonar systems. A realistic noise model is explored and incorporated into non-linear and topographic visualisation algorithms, building on the approach of [9]. Concepts are illustrated using a real-world dataset of 32 hydrophones monitoring a shallow-water environment in which targets are present and dynamic.
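As a hedged illustration of mapping dissimilarities to a low-dimensional display, the sketch below uses classical MDS, a linear baseline rather than the non-linear topographic algorithms (and explicit noise model) the paper investigates. The 32 surrogate "hydrophone" signals are random stand-ins.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: embed points from a dissimilarity matrix D so that
    inter-point distances approximate the given dissimilarities."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centring matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]     # top eigenpairs
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Toy stand-in for a hydrophone dissimilarity matrix (32 signals).
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 100))               # surrogate signal features
D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
Y = classical_mds(D)                         # 32 points in the plane
```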

Relevance:

90.00%

Publisher:

Abstract:

The popularity of online social media platforms provides an unprecedented opportunity to study real-world complex networks of interactions. However, releasing this data to researchers and the public comes at the cost of potentially exposing private and sensitive user information. It has been shown that naive anonymization of a network by removing the identity of the nodes is not sufficient to preserve users' privacy. In order to deal with malicious attacks, k-anonymity solutions have been proposed to partially obfuscate topological information that can be used to infer nodes' identity. In this paper, we study the problem of ensuring k-anonymity in time-varying graphs, i.e., graphs with a structure that changes over time, and multi-layer graphs, i.e., graphs with multiple types of links. More specifically, we examine the case in which the attacker has access to the degree of the nodes. The goal is to generate a new graph where, given the degree of a node in each (temporal) layer of the graph, such a node remains indistinguishable from k-1 other nodes in the graph. In order to achieve this, we find the optimal partitioning of the graph nodes such that the cost of anonymizing the degree information within each group is minimal. We show that this reduces to a special case of a Generalized Assignment Problem, and we propose a simple yet effective algorithm to solve it. Finally, we introduce an iterated linear programming approach to enforce the realizability of the anonymized degree sequences. The efficacy of the method is assessed through an extensive set of experiments on synthetic and real-world graphs.
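For intuition, here is a sketch of the classic single-layer degree-anonymisation dynamic program (in the style of Liu and Terzi): partition the sorted degree sequence into consecutive groups of at least k and raise each group to its maximum degree at minimum total cost. The paper generalises this to multiple (temporal) layers via a Generalized Assignment Problem formulation, which this sketch does not attempt.

```python
def degree_anonymize(degrees, k):
    """Single-layer k-degree anonymisation: partition the sorted degree
    sequence into consecutive groups of size >= k (and < 2k, which is
    sufficient for optimality) and raise each group to its maximum."""
    d = sorted(degrees, reverse=True)
    n = len(d)
    INF = float("inf")
    cost = [INF] * (n + 1)        # cost[i]: best cost to anonymise d[:i]
    cut = [0] * (n + 1)
    cost[0] = 0.0
    for i in range(k, n + 1):
        for j in range(max(0, i - 2 * k + 1), i - k + 1):
            if cost[j] == INF:
                continue
            c = cost[j] + sum(d[j] - d[t] for t in range(j, i))
            if c < cost[i]:
                cost[i], cut[i] = c, j
    # recover the anonymised sequence from the optimal cuts
    out, i = d[:], n
    while i > 0:
        j = cut[i]
        out[j:i] = [d[j]] * (i - j)
        i = j
    return out

print(degree_anonymize([5, 5, 4, 3, 3, 2, 1], k=2))  # -> [5, 5, 5, 3, 3, 2, 2]
```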

Relevance:

90.00%

Publisher:

Abstract:

One of the most fundamental problems that we face in the graph domain is that of establishing the similarity, or alternatively the distance, between graphs. In this paper, we address the problem of measuring the similarity between attributed graphs. In particular, we propose a novel way to measure the similarity through the evolution of a continuous-time quantum walk. Given a pair of graphs, we create a derived structure whose degree of symmetry is maximum when the original graphs are isomorphic, and where a subset of the edges is labeled with the similarity between the respective nodes. With this compositional structure to hand, we compute the density operators of the quantum systems representing the evolution of two suitably defined quantum walks. We define the similarity between the two original graphs as the quantum Jensen-Shannon divergence between these two density operators, and then we show how to build a novel kernel on attributed graphs based on the proposed similarity measure. We perform an extensive experimental evaluation on both synthetic and real-world data, which shows the effectiveness of the proposed approach. © 2013 Springer-Verlag.
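A simplified sketch of the ingredients: time-averaged density operators of continuous-time quantum walks, and the quantum Jensen-Shannon divergence between them. Note the paper evolves the walks on a derived merged structure of the two graphs; comparing walks on the raw adjacency matrices, as below, is an illustrative shortcut, and the two toy graphs are invented.

```python
import numpy as np
from scipy.linalg import expm

def walk_density(A, times=np.linspace(0.1, 10.0, 50)):
    """Time-averaged density operator of a continuous-time quantum walk
    on adjacency matrix A, started in the uniform superposition."""
    n = A.shape[0]
    psi0 = np.ones(n) / np.sqrt(n)
    rho = np.zeros((n, n), dtype=complex)
    for t in times:
        psi = expm(-1j * t * A) @ psi0
        rho += np.outer(psi, psi.conj())
    return rho / len(times)

def von_neumann_entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def qjsd(rho, sigma):
    """Quantum Jensen-Shannon divergence between two density operators."""
    mix = 0.5 * (rho + sigma)
    return von_neumann_entropy(mix) - 0.5 * (
        von_neumann_entropy(rho) + von_neumann_entropy(sigma))

A1 = np.array([[0,1,1,0],[1,0,1,0],[1,1,0,1],[0,0,1,0]], float)  # triangle + tail
A2 = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], float)  # path graph
print(qjsd(walk_density(A1), walk_density(A2)))  # 0 iff the states coincide
```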

Relevance:

90.00%

Publisher:

Abstract:

In many e-commerce Web sites, product recommendation is essential to improve user experience and boost sales. Most existing product recommender systems rely on historical transaction records or Web-site-browsing history of consumers in order to accurately predict online users' preferences for product recommendation. As such, they are constrained by the limited information available on specific e-commerce Web sites. With the prolific use of social media platforms, it now becomes possible to extract product demographics from online product reviews and social networks built from microblogs. Moreover, users' public profiles available on social media often reveal their demographic attributes such as age, gender, and education. In this paper, we propose to leverage the demographic information of both products and users extracted from social media for product recommendation. More specifically, we frame recommendation as a learning-to-rank problem which takes as input the features derived from both product and user demographics. An ensemble method based on gradient-boosted regression trees is extended to make it suitable for our recommendation task. We have conducted extensive experiments to obtain both quantitative and qualitative evaluation results. Moreover, we have also conducted a user study to gauge the performance of our proposed recommender system in a real-world deployment. All the results show that our system is more effective than the competitive baselines at generating recommendations that match users' preferences.
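As a rough sketch of the ranking stage, the code below trains a pointwise gradient-boosted regression tree (scikit-learn's off-the-shelf GradientBoostingRegressor) on concatenated user and product demographic features and ranks candidate products by predicted preference. The features and target are random stand-ins, and the paper's actual extension of the GBRT ensemble is not reproduced.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Feature rows: 4 user-demographic + 4 product-demographic columns
# (random stand-ins); target: graded preference of product for user.
X = rng.normal(size=(500, 8))
y = X[:, 0] * X[:, 4] + rng.normal(scale=0.1, size=500)  # toy interaction

model = GradientBoostingRegressor(n_estimators=200, max_depth=3,
                                  learning_rate=0.05).fit(X, y)

def recommend(user_feats, product_feats_list, top_n=5):
    """Score every candidate product for one user and rank: a pointwise
    stand-in for the paper's extended GBRT ranker."""
    rows = np.array([np.concatenate([user_feats, p])
                     for p in product_feats_list])
    scores = model.predict(rows)
    return np.argsort(scores)[::-1][:top_n]

user = rng.normal(size=4)
candidates = rng.normal(size=(50, 4))
print(recommend(user, candidates))  # indices of the 5 top-ranked products
```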

Relevance:

90.00%

Publisher:

Abstract:

Existing political theory, particularly that dealing with justice and/or rights, has long assumed citizenship as a core concept. Noncitizenship, if it is considered at all, is generally defined merely as the negation or deprivation of citizenship. As such, it is difficult to examine successfully the status of noncitizens, obligations towards them, and the nature of their role in political systems. This article addresses this critical gap by defining the theoretical problem that noncitizenship presents and demonstrating why it is an urgent concern. It surveys the contributions to the special issue for which this article is an introduction, drawing on cross-cutting themes and debates to highlight the importance of theorising noncitizenship, given both the problematic gap in the theoretical literature and the real-world problems created by noncitizenship that are not currently successfully addressed. Finally, the article discusses key future directions for the theorisation of noncitizenship.

Relevance:

90.00%

Publisher:

Abstract:

Real-world search problems, characterised by nonlinearity, noise and multidimensionality, are often best solved by hybrid algorithms, in which techniques embodying different necessary features are triggered at specific iterations in response to the current state of the problem space. In the existing literature, this alternation is managed either statically (through pre-programmed policies) or dynamically, at the cost of tight coupling with the algorithm's inner representation. We extract two design patterns for hybrid metaheuristic search algorithms, the All-Seeing Eye and the Commentator patterns, which we argue should be replaced by the more flexible and loosely coupled Simple Black Box (Two-B) and Utility-based Black Box (Three-B) patterns that we propose here. We recommend the Two-B pattern for purely fitness-based hybridisations and the Three-B pattern for more generic hybridisations based on search-quality evaluation.
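A minimal sketch of what the proposed Two-B pattern implies structurally: the hybridisation controller exchanges only fitness values with the search, never its inner representation. The trivial random selection policy, the two operators and the sphere objective are placeholder assumptions, not the paper's design.

```python
import random

class SimpleBlackBox:
    """Sketch of the Two-B pattern: the hybridisation controller sees
    only fitness feedback, never the algorithm's inner state."""
    def __init__(self, operators):
        self.operators = operators
        self.best = float("inf")

    def choose(self):
        return random.choice(self.operators)   # selection policy left trivial

    def feedback(self, fitness):
        self.best = min(self.best, fitness)    # only fitness crosses the boundary

def sphere(x):
    return sum(v * v for v in x)

def mutate(x):
    return [v + random.gauss(0, 0.1) for v in x]

def restart(x):
    return [random.uniform(-1, 1) for _ in x]

controller = SimpleBlackBox([mutate, restart])
x = restart([0.0] * 5)
for _ in range(200):
    op = controller.choose()                   # controller picks the operator
    cand = op(x)
    f = sphere(cand)
    controller.feedback(f)                     # fitness-only coupling
    if f < sphere(x):
        x = cand
print(sphere(x), controller.best)
```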

Relevance:

90.00%

Publisher:

Abstract:

In the study of complex networks, vertex centrality measures are used to identify the most important vertices within a graph. A related problem is that of measuring the centrality of an edge. In this paper, we propose a novel edge centrality index rooted in quantum information. More specifically, we measure the importance of an edge in terms of the contribution that it makes to the Von Neumann entropy of the graph. We show that this can be computed in terms of the Holevo quantity, a well-known quantum information-theoretic measure. While computing the Von Neumann entropy, and hence the Holevo quantity, requires computing the spectrum of the graph Laplacian, we show how to obtain a simplified measure through a quadratic approximation of the Shannon entropy. This in turn shows that the proposed centrality measure is strongly correlated with the negative degree centrality on the line graph. We evaluate our centrality measure through an extensive set of experiments on real-world as well as synthetic networks, and we compare it against commonly used alternative measures.
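As a rough proxy for the idea of an edge's entropy contribution, the sketch below scores each edge by the change in the Von Neumann entropy of the Laplacian density matrix when that edge is removed. The paper instead computes the contribution via the Holevo quantity and a quadratic entropy approximation; the toy graph is invented.

```python
import numpy as np

def vn_entropy(A):
    """Von Neumann entropy of a graph: entropy of the density matrix
    rho = L / trace(L), with L the combinatorial Laplacian."""
    L = np.diag(A.sum(axis=1)) - A
    w = np.linalg.eigvalsh(L / np.trace(L))
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def edge_centrality(A):
    """Entropy-contribution proxy: importance of edge (i, j) as the
    entropy change when it is deleted (assumes at least one edge
    always remains, so trace(L) stays positive)."""
    base = vn_entropy(A)
    scores = {}
    for i, j in zip(*np.triu_indices_from(A, k=1)):
        if A[i, j]:
            B = A.copy()
            B[i, j] = B[j, i] = 0.0
            scores[(int(i), int(j))] = base - vn_entropy(B)
    return scores

A = np.array([[0,1,1,0],[1,0,1,1],[1,1,0,1],[0,1,1,0]], float)
print(edge_centrality(A))
```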

Relevance:

90.00%

Publisher:

Abstract:

With rapid advances in sensing technologies, smart water networks have been attracting immense research interest in recent years. One of the most overarching tasks in smart water network management is the reduction of water loss (such as leaks and bursts in a pipe network). In this paper, we propose an efficient scheme for positioning water loss events based on water network topology. The state-of-the-art approach to this problem utilizes only limited topology information of the water network, namely a single shortest path between two sensor locations; consequently, the accuracy of positioning water loss events remains less than desirable. To resolve this problem, our scheme consists of two key ingredients. First, we design a novel graph-topology-based measure, which can recursively quantify the "average distances" for all pairs of sensor locations simultaneously in a water network. This measure substantially improves the accuracy of our positioning strategy by capturing the entire water network topology between every two sensor locations, yet without any sacrifice of computational efficiency. Then, we devise an efficient search algorithm that combines the "average distances" with the differences in the arrival times of the pressure variations detected at sensor locations. Experimental evaluations on a real-world test bed (WaterWiSe@SG) demonstrate that our proposed positioning scheme identifies water loss events more accurately than the best-known competitor.
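To illustrate the arrival-time side of the scheme, here is a sketch that ranks candidate leak nodes by how well network distances to a sensor pair explain a measured arrival-time difference. It deliberately uses single shortest-path distances (the baseline the paper improves on with its all-pairs "average distances"); the pipe lengths, wave speed and timing are invented.

```python
import networkx as nx

def locate_leak(G, sensors, dt, wave_speed=1000.0):
    """Rank candidate leak nodes by how well graph distances to a sensor
    pair explain the measured arrival-time difference dt = t1 - t2.
    Uses shortest-path distances -- the baseline the paper replaces
    with all-paths 'average distances'."""
    s1, s2 = sensors
    d1 = nx.single_source_dijkstra_path_length(G, s1, weight="length")
    d2 = nx.single_source_dijkstra_path_length(G, s2, weight="length")
    # a leak at v implies d1(v) - d2(v) = wave_speed * (t1 - t2)
    residual = {v: abs((d1[v] - d2[v]) - wave_speed * dt)
                for v in G.nodes if v in d1 and v in d2}
    return sorted(residual, key=residual.get)

G = nx.Graph()
for u, v, metres in [(0, 1, 100), (1, 2, 150), (2, 3, 100),
                     (1, 4, 200), (4, 3, 120)]:
    G.add_edge(u, v, length=metres)
# Pressure wave reaches sensor 0 some 0.05 s before sensor 3.
print(locate_leak(G, sensors=(0, 3), dt=-0.05)[:3])  # best candidates first
```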