943 results for Chunk-based information diffusion


Relevance: 100.00%

Abstract:

Our study concerns an important current problem, that of diffusion of information in social networks. This problem has received significant attention from the Internet research community in recent times, driven by many potential applications such as viral marketing and sales promotions. In this paper, we focus on the target set selection problem, which involves discovering a small subset of influential players in a given social network to perform a certain task of information diffusion. The target set selection problem manifests in two forms: 1) the top-k nodes problem and 2) the lambda-coverage problem. In the top-k nodes problem, we are required to find a set of k key nodes that would maximize the number of nodes being influenced in the network. The lambda-coverage problem is concerned with finding a set of key nodes of minimal size that can influence a given percentage lambda of the nodes in the entire network. We propose a new way of solving these problems using the Shapley value, a well-known solution concept in cooperative game theory. Our approach leads to algorithms, which we call the ShaPley value-based Influential Nodes (SPIN) algorithms, for solving the top-k nodes problem and the lambda-coverage problem. We compare the performance of the proposed SPIN algorithms with well-known algorithms in the literature. Through extensive experimentation on four synthetically generated random graphs and six real-world data sets (Celegans, Jazz, NIPS coauthorship data set, Netscience data set, High-Energy Physics data set, and Political Books data set), we show that the proposed SPIN approach is more powerful and computationally efficient. Note to Practitioners: In recent times, social networks have received a high level of attention due to their proven ability to improve the performance of web search, recommendations in collaborative filtering systems, the spread of a technology in the market through viral marketing techniques, etc. It is well known that the interpersonal relationships (or ties or links) between individuals cause change or improvement in the social system, because the decisions made by individuals are influenced heavily by the behavior of their neighbors. An interesting and key problem in social networks is to discover the nodes that can influence other nodes most strongly and deeply. This problem is called the target set selection problem and has two variants: 1) the top-k nodes problem, where we are required to identify a set of k influential nodes that maximize the number of nodes being influenced in the network, and 2) the lambda-coverage problem, which involves finding a set of influential nodes of minimum size that can influence a given percentage lambda of the nodes in the entire network. There are many existing algorithms in the literature for solving these problems. In this paper, we propose a new algorithm based on a novel interpretation of information diffusion in a social network as a cooperative game. Using this analogy, we develop an algorithm based on the Shapley value of the underlying cooperative game. The proposed algorithm outperforms existing algorithms in generality, computational complexity, or both. Our results are validated through extensive experimentation on both synthetically generated and real-world data sets.
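The abstract does not spell out the computation; as a rough illustration of the core idea, the sketch below approximates each node's Shapley value by sampling random permutations and averaging marginal contributions, using a toy one-hop coverage function as the coalition value. The `coalition_value` function and the graph data are illustrative assumptions, not the paper's exact diffusion game.

```python
import random
from collections import defaultdict

def coalition_value(graph, coalition):
    """Toy characteristic function: number of nodes 'covered' by the
    coalition, i.e. its members plus their immediate neighbours."""
    covered = set(coalition)
    for node in coalition:
        covered.update(graph[node])
    return len(covered)

def shapley_top_k(graph, k, samples=1000, seed=0):
    """Estimate each node's Shapley value by sampling random permutations
    and averaging marginal contributions, then return the top-k nodes."""
    rng = random.Random(seed)
    nodes = list(graph)
    value = defaultdict(float)
    for _ in range(samples):
        rng.shuffle(nodes)
        coalition, prev = set(), 0
        for node in nodes:
            coalition.add(node)
            cur = coalition_value(graph, coalition)
            value[node] += (cur - prev) / samples
            prev = cur
    return sorted(graph, key=lambda n: value[n], reverse=True)[:k]

# Undirected graph as adjacency lists (illustrative data).
g = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1, 4], 4: [3]}
print(shapley_top_k(g, k=2))
```

Permutation sampling keeps the cost near samples × n coalition evaluations, which is what makes Shapley-based ranking tractable on large graphs.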

Relevance: 100.00%

Abstract:

Many meteorological phenomena occur at different locations simultaneously and vary both temporally and spatially. It is essential to track these multiple phenomena for accurate weather prediction. Efficient analysis requires high-resolution simulations, which can be conducted by introducing finer-resolution nested simulations (nests) at the locations of these phenomena. Tracking multiple weather phenomena simultaneously requires simultaneous execution of the nests on different subsets of the processors allocated to the main weather simulation. Dynamic variation in the number of these nests requires efficient processor reallocation strategies. In this paper, we develop strategies for efficient partitioning and repartitioning of the nests among the processors. As a case study, we consider an application that tracks multiple organized cloud clusters in tropical weather systems. We first present a parallel data analysis algorithm to detect such clouds. We then develop a tree-based hierarchical diffusion method that reallocates processors for the nests so as to reduce redistribution cost, achieved through a novel tree reorganization approach. We show that our approach exhibits up to 25% lower redistribution cost and 53% fewer hop-bytes than a processor reallocation strategy that does not consider the existing allocation.
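As a much-simplified illustration of the objective (moving as few processors as possible when nest demands change), the sketch below keeps each nest's existing processors where it can and moves only the surplus. It does not reproduce the paper's tree-based hierarchical diffusion or tree reorganization; all names and data are illustrative.

```python
def reallocate(current, demand):
    """Redistribute processors when nest demands change, keeping each
    nest's existing processors where possible so that few processors
    move. Assumes total demand does not exceed the processor pool."""
    new_alloc, free, moved = {}, [], 0
    # Shrinking nests keep a prefix of their processors; surplus is freed.
    for nest, procs in current.items():
        keep = procs[:demand.get(nest, 0)]
        free.extend(procs[len(keep):])
        new_alloc[nest] = keep
    # Growing or new nests draw the remainder from the free pool.
    for nest, need in demand.items():
        while len(new_alloc.setdefault(nest, [])) < need:
            new_alloc[nest].append(free.pop())
            moved += 1
    return new_alloc, moved

alloc, moved = reallocate({"A": [0, 1, 2, 3], "B": [4, 5]},
                          {"A": 2, "B": 3, "C": 1})
print(alloc, "processors moved:", moved)
```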

Relevance: 100.00%

Abstract:

OBJECTIVE: To assess the efficacy, with respect to participant understanding of information, of a computer-based approach to communication about complex, technical issues that commonly arise when seeking informed consent for clinical research trials. DESIGN, SETTING AND PARTICIPANTS: An open, randomised controlled study of 60 patients with diabetes mellitus, aged 27-70 years, recruited between August 2006 and October 2007 from the Department of Diabetes and Endocrinology at the Alfred Hospital and Baker IDI Heart and Diabetes Institute, Melbourne. INTERVENTION: Participants were asked to read information about a mock study via a computer-based presentation (n = 30) or a conventional paper-based information statement (n = 30). The computer-based presentation contained visual aids, including diagrams, video, hyperlinks and quiz pages. MAIN OUTCOME MEASURES: Understanding of information as assessed by quantitative and qualitative means. RESULTS: Assessment scores used to measure level of understanding were significantly higher in the group that completed the computer-based task than the group that completed the paper-based task (82% v 73%; P = 0.005). More participants in the group that completed the computer-based task expressed interest in taking part in the mock study (23 v 17 participants; P = 0.01). Most participants from both groups preferred the idea of a computer-based presentation to the paper-based statement (21 in the computer-based task group, 18 in the paper-based task group). CONCLUSIONS: A computer-based method of providing information may help overcome existing deficiencies in communication about clinical research, and may reduce costs and improve efficiency in recruiting participants for clinical trials.

Relevance: 100.00%

Abstract:

The impact of health promotion programs is related to both program effectiveness and the extent to which the program is implemented among the target population. The purpose of this dissertation was to describe the development and evaluation of a school-based program diffusion intervention designed to increase the rate of dissemination and adoption of the Child and Adolescent Trial for Cardiovascular Health, or CATCH program (recently renamed the Coordinated Approach to Child Health). The first study described the process by which schools across the state of Texas spontaneously began to adopt the CATCH program after it was tested and proven effective in a multi-site randomized efficacy trial. A survey of teachers and administrator representatives of all schools on record that purchased the CATCH program, but were not involved in the efficacy trial, was used to find out who brought CATCH into the schools, how they garnered support for its adoption, why they decided to adopt the program, and what was involved in deciding to adopt. The second study described how the Intervention Mapping framework guided the planning, development and implementation of a program for the diffusion of CATCH. An iterative process was used to integrate theory, literature, the experience of project staff and data from the target population into a meaningful set of program determinants and performance objectives. Proximal program objectives were specified and translated into both media and interpersonal communication strategies for program diffusion. The third study assessed the effectiveness of the diffusion program in a case-comparison design. Three of the twenty Education Service Center regions in Texas, selected for similar demographic characteristics, were followed for adoption of the CATCH curriculum. One of these regions received the full media and interpersonal channel intervention; a second received a reduced media-only intervention; and a third received no intervention. Results suggested that the use of interpersonal channels with media follow-up is an effective means of facilitating program dissemination and adoption. The media-alone condition was not effective in facilitating program adoption.

Relevance: 100.00%

Abstract:

We present two approaches to clustering dialogue-based information obtained by the speech understanding module and the dialogue manager of a spoken dialogue system. The purpose is to estimate a language model for each cluster and use these models to dynamically modify the model of the speech recognizer at each dialogue turn. In the first approach we build the cluster tree using local decisions based on a Maximum Normalized Mutual Information criterion. In the second we make global decisions, based on optimizing the global perplexity of the combination of the cluster-related LMs. Our experiments show a relative word error rate reduction of 15.17%, which helps to improve the performance of the understanding and dialogue manager modules.
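A hedged sketch of the second, perplexity-driven idea: interpolate the cluster LMs and score the mixture by held-out perplexity. Toy add-alpha-smoothed unigram models stand in for the real recognizer LMs; all names and data here are illustrative.

```python
import math
from collections import Counter

def unigram_lm(sentences, vocab, alpha=1.0):
    """Add-alpha smoothed unigram LM estimated from one cluster."""
    counts = Counter(w for s in sentences for w in s)
    total = sum(counts.values())
    return {w: (counts[w] + alpha) / (total + alpha * len(vocab))
            for w in vocab}

def interpolate(lms, weights):
    """Linear interpolation of several cluster LMs."""
    return {w: sum(wt * lm[w] for wt, lm in zip(weights, lms))
            for w in lms[0]}

def perplexity(lm, sentences):
    """Perplexity of held-out sentences under an LM."""
    logp = sum(math.log(lm[w]) for s in sentences for w in s)
    n = sum(len(s) for s in sentences)
    return math.exp(-logp / n)

# Toy dialogue clusters and a held-out turn (illustrative data).
c1 = [["book", "a", "flight"], ["a", "flight", "to", "madrid"]]
c2 = [["what", "time", "is", "it"]]
vocab = {w for s in c1 + c2 for w in s}
mix = interpolate([unigram_lm(c1, vocab), unigram_lm(c2, vocab)], [0.7, 0.3])
print(perplexity(mix, [["a", "flight", "to", "madrid"]]))
```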

Relevance: 100.00%

Abstract:

Concept evaluation in the early phase of product development plays a crucial role in new product development, as it determines the direction of subsequent design activities. However, the evaluation information at this stage comes mainly from experts' judgments, which are subjective and imprecise. Managing this subjectivity to reduce evaluation bias is a major challenge in design concept evaluation. This paper proposes a comprehensive evaluation method that combines information entropy theory with rough numbers. Rough numbers are first used to aggregate individual judgments and priorities and to handle vagueness in a group decision-making environment. A rough-number-based information entropy method is then proposed to determine the relative weights of the evaluation criteria, and composite performance values based on rough numbers are calculated to rank the candidate design concepts. Results from a practical case study on the concept evaluation of an industrial robot design show that the integrated evaluation model can effectively strengthen objectivity across the decision-making process.
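The entropy-weighting step is standard and easy to make concrete. The sketch below computes classic entropy weights from a crisp score matrix; the paper applies the same idea to rough-number intervals, which this simplified version omits, and the scores are illustrative.

```python
import math

def entropy_weights(matrix):
    """Entropy weighting: criteria whose scores vary more across the
    alternatives carry more information and receive larger weights.
    matrix[i][j] is the (positive) score of alternative i on criterion j."""
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)
    divergence = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        divergence.append(1.0 - e)  # 1 - entropy of criterion j
    s = sum(divergence)
    return [d / s for d in divergence]

# Rows: candidate design concepts; columns: evaluation criteria.
scores = [[7, 9, 6],
          [8, 5, 6],
          [6, 8, 7]]
print(entropy_weights(scores))
```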

Relevance: 100.00%

Abstract:

Witnessing the wide spread of malicious information in large networks, we develop an efficient method to detect anomalous diffusion sources and thus protect networks from security and privacy attacks. To date, most existing work on diffusion source detection is based on the assumption that network snapshots reflecting information diffusion can be obtained continuously. However, obtaining snapshots of an entire network requires deploying detectors on all network nodes and is therefore very expensive. Alternatively, in this article, we study the problem of locating diffusion sources by learning from information diffusion data collected from only a small subset of network nodes. Specifically, we present a new regression learning model that can detect anomalous diffusion sources by jointly addressing five challenges: an unknown number of source nodes, few activated detectors, an unknown initial propagation time, uncertain propagation paths and uncertain propagation time delays. We theoretically analyze the strength of the model and derive performance bounds. We empirically test and compare the model on both synthetic and real-world networks to demonstrate its performance.
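The abstract gives no model details, but the unknown-initial-time challenge can be illustrated with a simple consistency heuristic: for each candidate source, the observed detector times minus the hop distances should be roughly constant, so the candidate minimising the variance of those offsets is a plausible source. This single-source sketch is an assumption-laden stand-in, not the paper's regression model.

```python
import statistics
from collections import deque

def bfs_dist(graph, src):
    """Hop distances from src, used as a proxy for propagation delay."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def locate_source(graph, observations):
    """Pick the candidate whose distances best explain the detector
    times: with an unknown start time, (t_obs - dist) should be nearly
    constant, so we minimise its variance across detectors.
    observations maps detector node -> observed activation time."""
    best, best_score = None, float("inf")
    for cand in graph:
        d = bfs_dist(graph, cand)
        offsets = [t - d[v] for v, t in observations.items() if v in d]
        if len(offsets) < 2:
            continue
        score = statistics.pvariance(offsets)
        if score < best_score:
            best, best_score = cand, score
    return best

g = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}
print(locate_source(g, {0: 5.1, 2: 5.0, 4: 6.2}))  # true source: node 1
```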

Relevance: 100.00%

Abstract:

The study will cross-fertilise Information Systems (IS) and Services Marketing ideas by reconceptualising the information system as a service (ISaaS). The study addresses known limitations of arguably the two most significant dependent variables in these disciplines: Information System Success (or IS-Impact) and Service Quality. Planned efforts to synthesise analogous conceptions across these disciplines are expected to force a deeper theoretical understanding of the broad notions of success, quality, value and satisfaction and their interrelations. The aims of this research are to: (1) yield a conceptually superior and more extensively validated IS success measurement model, and (2) develop and operationalise a more rigorously validated Service Quality measurement model, while extending the ‘service’ notion to ‘operational computer-based information systems in organisations’. In developing the new models the study will address contemporary validation issues.

Relevance: 100.00%

Abstract:

Phenomenography is a qualitative research approach that seeks to explore variation in how people experience various aspects of their world. Phenomenography has been used in numerous information research studies that have explored various phenomena of interest in the library and information sphere. This paper provides an overview of the phenomenographic method and discusses key assumptions that underlie this approach to research. Aspects including data collection, data analysis and the outcomes of phenomenographic research are also detailed. The paper concludes with an illustration of how phenomenography was used in research to investigate students’ experiences of web-based information searching. The results of this research demonstrate how phenomenography can reveal variation, making it possible to develop greater understanding of the phenomenon as it was experienced, and to draw upon these experiences to improve and enhance current practice.

Relevance: 100.00%

Abstract:

In this paper I will explore some experience-based perspectives on information literacy research and practice. The research-based understanding of what information literacy looks like to those experiencing it is very different from the standard interpretations of information literacy as involving largely text-based information searching, interpretation, evaluation and use. It also involves particular understandings of the interrelation between information and learning experiences. In following this thread through the history of information literacy I will reflect on aspects of the past, present and future of information literacy research. In each of these areas I explore experiential, especially phenomenographic, approaches to information literacy and information literacy education, to reveal the unfolding understanding of people's experience of information literacy stemming from this orientation.

In addressing the past I will look in particular at the contribution of the seven faces of information literacy and some lessons learned from attending to variation in experience. I will explore important directions and insights that this history may help us to retain, including the value of understanding people's information literacy experience.

In addressing the present, I will introduce more recent work that adopts the key ideas of informed learning by attending to both information and learning experiences in specific contexts. I will look at some contemporary directions and key issues, including the reinvention of the phenomenographic, or relational, approach to information literacy as informed learning, or using information to learn. I will also provide some examples of the contribution of experiential approaches to information literacy research and practice. The evolution and development of the phenomenographic approach to information literacy, and the growing attention to a dual focus on information and learning experiences in this approach, will be highlighted.

Finally, in addressing the future I will return to advocacy, the recognition and pursuit of the transforming and empowering heart of information literacy, and suggest that for information literacy research, including the experiential, a turn towards the emancipatory has much to offer.

Relevance: 100.00%

Abstract:

Communication and information diffusion are typically difficult in situations where centralised structures may become unavailable. In this context, decentralised communication based on epidemic broadcast becomes essential. It can be seen as opportunity-based flooding for message broadcasting within a swarm of autonomous agents, where each entity tries to share the information it possesses with its neighbours. As an example application of such a system, we present simulation results where agents have to coordinate to map an unknown area.
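A minimal sketch of push-style epidemic (gossip) broadcast, assuming a fixed fanout and synchronous rounds; both are illustrative simplifications, since the paper's agents operate asynchronously in a mobile swarm.

```python
import random

def epidemic_broadcast(graph, source, fanout=2, rounds=10, seed=0):
    """Push-style gossip: each synchronous round, every informed agent
    forwards the message to up to `fanout` random neighbours.
    Returns the set of informed agents."""
    rng = random.Random(seed)
    informed = {source}
    for _ in range(rounds):
        new = set()
        for node in informed:
            neighbours = graph[node]
            for peer in rng.sample(neighbours, min(fanout, len(neighbours))):
                if peer not in informed:
                    new.add(peer)
        if not new:          # message has saturated the reachable swarm
            break
        informed |= new
    return informed

# Ring of 8 agents, each talking to its two neighbours.
g = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
print(sorted(epidemic_broadcast(g, source=0)))
```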

Relevance: 100.00%

Abstract:

To understand factors that affect brain connectivity and integrity, it is beneficial to automatically cluster white matter (WM) fibers into anatomically recognizable tracts. Whole brain tractography, based on diffusion-weighted MRI, generates vast sets of fibers throughout the brain; clustering them into consistent and recognizable bundles can be difficult as there are wide individual variations in the trajectory and shape of WM pathways. Here we introduce a novel automated tract clustering algorithm based on label fusion - a concept from traditional intensity-based segmentation. Streamline tractography generates many incorrect fibers, so our top-down approach extracts tracts consistent with known anatomy, by mapping multiple hand-labeled atlases into a new dataset. We fuse clustering results from different atlases, using a mean distance fusion scheme. We reliably extracted the major tracts from 105-gradient high angular resolution diffusion images (HARDI) of 198 young normal twins. To compute population statistics, we use a pointwise correspondence method to match, compare, and average WM tracts across subjects. We illustrate our method in a genetic study of white matter tract heritability in twins.
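A hedged sketch of the distance machinery such a pipeline needs: a symmetric mean closest-point distance between streamlines, and a label decision that averages distances to hand-labelled atlas fibers across atlases, in the spirit of the paper's mean-distance fusion. Tract names and fibers below are toy data, not the paper's atlases.

```python
import numpy as np

def mean_closest_point(a, b):
    """Symmetric mean closest-point distance between two streamlines,
    each given as an (n, 3) array of points."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def fuse_label(fiber, atlases):
    """Label a subject fiber with the tract whose atlas examples are
    closest on average, averaging per-atlas distances (a mean distance
    fusion across atlases)."""
    scores = {}
    for atlas in atlases:                  # atlas: {tract: [streamlines]}
        for tract, examples in atlas.items():
            d = min(mean_closest_point(fiber, ex) for ex in examples)
            scores.setdefault(tract, []).append(d)
    return min(scores, key=lambda t: np.mean(scores[t]))

# Two toy "atlas" tracts and a probe fiber near the first one.
cst = np.array([[0, 0, 0], [0, 0, 40], [0, 0, 80]], dtype=float)
cc = np.array([[-40, 0, 50], [0, 0, 55], [40, 0, 50]], dtype=float)
probe = cst + np.array([2.0, 1.0, 0.0])
print(fuse_label(probe, [{"CST": [cst], "CC": [cc]}]))  # -> CST
```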

Relevance: 100.00%

Abstract:

This project proposes a framework that identifies high-value disaster-based information from social media to facilitate key decision-making processes during natural disasters. At present it is very difficult to differentiate between information that has a high degree of disaster relevance and information that has a low degree of disaster relevance. By automatically harvesting and categorising social media conversation streams, this framework identifies highly disaster-relevant information that can be used by emergency services for intelligence gathering and decision-making.
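A minimal sketch of the categorisation step, assuming a supervised TF-IDF plus logistic-regression classifier; this is a common choice, not necessarily the project's method, and the posts, labels and probability read-out are all illustrative.

```python
# Toy training data; a real system would train on labelled disaster posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

posts = ["flood waters rising fast near the bridge, road cut off",
         "stay safe everyone, thoughts and prayers",
         "evacuation centre at the school is now open",
         "what a crazy week it has been"]
labels = [1, 0, 1, 0]  # 1 = high disaster relevance, 0 = low

vec = TfidfVectorizer(ngram_range=(1, 2))
clf = LogisticRegression().fit(vec.fit_transform(posts), labels)

new_post = ["bridge road flooded, need rescue near the school"]
print(clf.predict_proba(vec.transform(new_post))[0, 1])  # P(high relevance)
```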