98 results for Chunk-based information diffusion

in Deakin Research Online - Australia


Relevance: 100.00%

Abstract:

The global diffusion of epidemics, computer viruses, and rumors causes great damage to our society. It is critical to identify diffusion sources and quarantine them in a timely manner. However, most methods proposed so far are unsuitable for diffusion with multiple sources because of their high computational cost and the complex spatiotemporal diffusion processes involved. In this paper, based on knowledge of the infected nodes and their connections, we propose a novel method to identify multiple diffusion sources, which addresses three main issues in this area: 1) how many sources are there? 2) where did the diffusion emerge? and 3) when did the diffusion break out? We first derive an optimization formulation for the multi-source identification problem. This is based on altering the original network into a new network that accounts for two key elements: 1) the propagation probability and 2) the number of hops between nodes. Experiments demonstrate that the altered network can accurately reflect the complex diffusion processes with multiple sources. Second, we derive a fast method to optimize the formulation. The proposed method is proved to be convergent, with computational complexity O(mn log α), where α = α(m,n) is the slowly growing inverse-Ackermann function, n is the number of infected nodes, and m is the number of edges connecting them. Finally, we introduce an efficient algorithm to estimate the spreading time and the number of diffusion sources. To evaluate the proposed method, we compare it with many competing methods on various real-world network topologies. Our method shows significant advantages in the estimation of multiple sources and the prediction of spreading time.
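The reported O(mn log α) complexity, with α the inverse-Ackermann function, is characteristic of union-find (disjoint-set) algorithms. As a hedged illustration only, not the authors' actual method, infected nodes could be grouped into candidate source regions with union-find over an altered-network edge distance; the node names, weights, and threshold below are all illustrative assumptions:

```python
# Sketch: cluster infected nodes into candidate source regions with union-find.
# Each edge gets an "altered network" distance of -log(propagation probability),
# so unlikely links are long; hop counts are omitted for brevity. The weighting
# and threshold are assumptions, not the paper's formulation.
import math

class DSU:
    def __init__(self, nodes):
        self.parent = {v: v for v in nodes}
    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def candidate_source_regions(infected, edges, threshold=2.0):
    """infected: list of nodes; edges: (u, v, prob) between infected nodes."""
    dsu = DSU(infected)
    for u, v, prob in edges:
        # Low propagation probability => long altered-network edge; skip it.
        if -math.log(prob) <= threshold:
            dsu.union(u, v)
    regions = {}
    for v in infected:
        regions.setdefault(dsu.find(v), []).append(v)
    return list(regions.values())

infected = ["a", "b", "c", "d", "e"]
edges = [("a", "b", 0.9), ("b", "c", 0.8), ("d", "e", 0.7), ("c", "d", 0.01)]
print(candidate_source_regions(infected, edges))  # two regions -> two sources
```

The number of resulting regions serves as a crude estimate of the number of sources; each region could then be searched for its most central node as the candidate source.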

Relevance: 100.00%

Abstract:

This chapter reviews the existing methods of requirements analysis as prescribed by some of the best-known web-development methods. It also discusses the pre-eminent importance of stakeholder analysis, the identification of stakeholder views and concerns, and the processes governing the elicitation of web systems requirements. The chapter finally derives a model of concern-driven requirements evolution from several case studies undertaken in the area of web-enabled employee service systems.

Relevance: 100.00%

Abstract:

During the conduct of a research project into influences on the use of management accounting information, the use of activity-based techniques and information in two British banks was studied by the application of grounded theory principles. Juxtaposition of these two case studies reveals insights about the managers' significantly different experiences of ongoing applications, and the different outcomes of implementation that may arise, despite commonality in the organization and industry environment. This paper presents these two case studies, highlights the similarities and differences between them, and draws some conclusions about the causes of the differences. Factors that can be managed to achieve a greater use of these particular management accounting techniques, and the information they generate, are revealed. In particular, the findings suggest that the introduction of transfer charging between the bank's internal units highlights the need for activity-based techniques, and that education, communication and implementor support are vital, both for implementation success and for the widespread continuing use of the resultant applications. Further, between the two cases the greatest consensus was found in a common concern about the amount of detail in the databank and reports.

Relevance: 100.00%

Abstract:

This paper presents a Web-based information system for promoting the cascading utilisation of construction materials in order to mitigate the increasing environmental pressure exerted by the construction industry. First, the paper points out the weaknesses of current waste material exchange systems. Then, a new approach to reusing demolished materials is introduced, by which the utilisation of demolished materials may be ascertained before the demolition waste is actually produced. Information technologies, including web-based intelligent and distributed systems, are applied to actualise this approach. Finally, the development and implementation of the system is described in detail.

Relevance: 100.00%

Abstract:

An understanding by support organisations of the key factors enabling successful enterprise after-sales customer support provision when using Web-based Self-service Systems (WSSs) is essential to making improvements in such systems. This paper reports key stakeholder-oriented findings from an interpretive study of critical success factors (CSFs) for the transfer of after-sales support-oriented knowledge from an information technology (IT) service provider to enterprise customers when a WSS is used. The findings suggest that researchers and practitioners should consider WSSs within a complex network of service providers, business partners and customer firms. The paper also clearly points to a need for support organisations to engage in greater collaboration and integration of WSSs with enterprise customers and business partners.

Relevance: 100.00%

Abstract:

Background : General Practitioners and community nurses rely on easily accessible, evidence-based online information to guide practice. To date, the methods that underpin the scoping of user-identified online information needs in palliative care have remained under-explored. This paper describes the benefits and challenges of a collaborative approach involving users and experts that informed the first stage of the development of a palliative care website.

Method : The action research-inspired methodology included a panel assessment of an existing palliative care website based in Victoria, Australia; a pre-development survey (n = 197) scoping potential audiences and palliative care information needs; working parties conducting a needs analysis about necessary information content for a redeveloped website targeting health professionals and caregivers/patients; an iterative evaluation process involving users and experts; as well as a final evaluation survey (n = 166).

Results : Involving users in the identification of content and links for a palliative care website is time-consuming and requires initial resources, strong networking skills and commitment. However, user participation provided crucial information that widened the scope of the website audience and guided the development and testing of the website. The needs analysis underpinning the project suggests that palliative care peak bodies need to address three distinct audiences (clinicians, allied health professionals, and patients and their caregivers).

Conclusion :
Web developers should pay close attention to the content, language, and accessibility needs of these groups. Given the substantial cost associated with the maintenance of authoritative health information sites, the paper proposes a more collaborative development in which users can be engaged in the definition of content to ensure relevance and responsiveness, and to eliminate unnecessary detail. Access to volunteer networks forms an integral part of such an approach.

Relevance: 100.00%

Abstract:

The recent emergence of intelligent agent technology and advances in information gathering have been important steps forward in efficiently managing and using the vast amount of information now available on the Web to make informed decisions. There are, however, still many problems to be overcome in the information gathering research arena before the delivery of the relevant information required by end users becomes possible. Good decisions cannot be made without sufficient, timely, and correct information. Traditionally it is said that knowledge is power; nowadays, sufficient, timely, and correct information is power. Gathering relevant information to meet user information needs is therefore the crucial step in making good decisions. The ideal goal of information gathering is to obtain only the information that users need (no more and no less). However, the volume of information available, the diverse formats of information, uncertainties in the information, and the distributed locations of information (e.g. the World Wide Web) hinder the process of gathering the right information to meet user needs. Specifically, two fundamental issues regarding the efficiency of information gathering are mismatch and overload. Mismatch means that some information that meets user needs has not been gathered (it is missed out), whereas overload means that some gathered information is not what users need. Traditional information retrieval has developed considerably over the past twenty years, and the introduction of the Web has changed people's perceptions of it. The task of information retrieval is usually considered to be leading the user to those documents that are relevant to his or her information needs; a related function is filtering out the irrelevant documents (information filtering). Research into traditional information retrieval has provided many retrieval models and techniques for representing documents and queries.
Nowadays, information is becoming highly distributed and increasingly difficult to gather, and user information needs contain many uncertainties. These factors motivate research into agent-based information gathering, and agent-based information systems have arisen in response. In such systems, intelligent agents obtain commitments from their users and act on the users' behalf to gather the required information. They can retrieve relevant information from highly distributed, uncertain environments because of their merits of intelligence, autonomy, and distribution. Current research on agent-based information gathering systems divides into single-agent gathering systems and multi-agent gathering systems. In both areas there are still open problems to be solved before agent-based information gathering systems can retrieve uncertain information effectively from highly distributed environments. The aim of this thesis is to develop a theoretical framework for intelligent agents to gather information from the Web. The research integrates the areas of information retrieval and intelligent agents. The specific research areas in this thesis are the development of an information filtering model for single-agent systems, and the development of a dynamic belief model for information fusion in multi-agent systems. The research results are also supported by the construction of real information gathering agents (e.g., a Job Agent) for the Internet that help users gather useful information stored on Web sites. In such a framework, information gathering agents are able to describe (or learn) the user's information needs and act like the user to retrieve, filter, and/or fuse information. A rough set based information filtering model is developed to address the problem of overload.
The new approach allows users to describe their information needs on user concept spaces rather than on document spaces, and it views a user information need as a rough set over the document space. Rough set decision theory is used to classify new documents into three regions: a positive region, a boundary region, and a negative region. Two experiments are presented to verify this model, and they show that the rough set based model provides an efficient approach to the overload problem. In this research, a dynamic belief model for information fusion in multi-agent environments is also developed. This model has polynomial time complexity, and it has been proved that the fusion results are belief (mass) functions. Using this model, a collection fusion algorithm for information gathering agents is presented. The difficult case for this research is where collections may be used by more than one agent; the algorithm uses cooperation between agents to provide a solution to this problem in distributed information retrieval systems. This thesis presents solutions to the theoretical problems in agent-based information gathering systems, including information filtering models, agent belief modelling, and collection fusion. It also presents solutions to some of the technical problems in agent-based information systems, such as document classification, the architecture of agent-based information gathering systems, and decision making in multi-agent environments. Such information gathering agents will gather relevant information from highly distributed, uncertain environments.
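The three-region classification described above can be sketched in a few lines; the relevance scores and the two thresholds below are illustrative assumptions, not the thesis's actual model:

```python
# Sketch: three-way (rough set style) document classification into positive,
# boundary, and negative regions, using an upper and a lower threshold on a
# relevance score. Scores and thresholds are illustrative assumptions only.

def classify(score, lower=0.3, upper=0.7):
    """Place a document's relevance score into one of three regions."""
    if score >= upper:
        return "positive"   # confidently relevant: deliver to the user
    if score <= lower:
        return "negative"   # confidently irrelevant: discard
    return "boundary"       # uncertain: defer or seek user feedback

docs = {"d1": 0.9, "d2": 0.5, "d3": 0.1}
regions = {d: classify(s) for d, s in docs.items()}
print(regions)  # {'d1': 'positive', 'd2': 'boundary', 'd3': 'negative'}
```

The boundary region is what distinguishes this three-way scheme from a plain binary filter: uncertain documents are neither delivered nor discarded outright, which is how the approach mitigates overload without aggravating mismatch.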

Relevance: 100.00%

Abstract:

Farmers living in rural villages of Sri Lanka do not have proper access to information to make informed decisions about their livelihoods, and as a result they face constant hardships in their lives. They use mobile phones to communicate, but these are not currently connected to the Internet. We are investigating how to provide personalized information to farmers with the aim of empowering them to make informed decisions and take appropriate actions. In this paper we propose an empowerment model designed to help farmers achieve goals that have been identified using a scenario-based approach. The model examines several empowerment processes and supporting tools that would help farmers achieve their goals, with the expectation that they would gain an increased sense of control, self-efficacy, knowledge and competency. This empowerment model is applied to the development of a mobile-based information system being developed by an international collaborative research group to address the farmers' issues.

Relevance: 100.00%

Abstract:

Recently, effective connectivity studies have gained significant attention in the neuroscience community, as electroencephalography (EEG) data, with its high time resolution, can give us a wider understanding of the information flow within the brain. Among the tools used in effective connectivity analysis, Granger Causality (GC) has found a prominent place. GC analysis based on strictly causal multivariate autoregressive (MVAR) models does not account for instantaneous interactions among the sources; if such interactions are present, GC based on a strictly causal MVAR model will lead to erroneous conclusions about the underlying information flow. Thus, the work presented in this paper applies an extended MVAR (eMVAR) model that accounts for zero-lag interactions. We propose a constrained adaptive Kalman filter (CAKF) approach for eMVAR model identification and demonstrate that it performs better than short-time-window-based adaptive estimation when applied to information flow analysis.
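The model class underlying GC analysis can be illustrated with a minimal strictly causal MVAR(1) fit by least squares; this is a sketch of the baseline that the abstract critiques, not the paper's eMVAR model or its CAKF identification, and all coefficients here are invented for the example:

```python
# Sketch: fit a strictly causal MVAR(1) model x[t] = A x[t-1] + e[t] by least
# squares. A nonzero off-diagonal coefficient suggests directed (Granger-style)
# information flow. This is NOT the paper's eMVAR/CAKF method; the zero-lag
# (instantaneous) term that eMVAR adds is deliberately absent here.
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.5, 0.0],      # channel 0 drives itself
                   [0.4, 0.3]])     # channel 0 also drives channel 1

# Simulate two EEG-like channels.
T = 2000
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + 0.1 * rng.standard_normal(2)

# Least-squares estimate: solve x[t] ≈ A x[t-1] over all t.
X_past, X_now = x[:-1], x[1:]
B, *_ = np.linalg.lstsq(X_past, X_now, rcond=None)  # solves X_past @ B ≈ X_now
A_hat = B.T                                         # so A = B.T

# A_hat[1, 0] should be clearly nonzero (0 -> 1 flow), A_hat[0, 1] near zero.
print(np.round(A_hat, 2))
```

If the true system also had instantaneous (zero-lag) coupling, this strictly causal fit would smear that coupling into the lagged coefficients, which is exactly the failure mode motivating the eMVAR model.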

Relevance: 100.00%

Abstract:

OBJECTIVE: To assess the efficacy, with respect to participant understanding of information, of a computer-based approach to communication about complex, technical issues that commonly arise when seeking informed consent for clinical research trials. DESIGN, SETTING AND PARTICIPANTS: An open, randomised controlled study of 60 patients with diabetes mellitus, aged 27-70 years, recruited between August 2006 and October 2007 from the Department of Diabetes and Endocrinology at the Alfred Hospital and Baker IDI Heart and Diabetes Institute, Melbourne. INTERVENTION: Participants were asked to read information about a mock study via a computer-based presentation (n = 30) or a conventional paper-based information statement (n = 30). The computer-based presentation contained visual aids, including diagrams, video, hyperlinks and quiz pages. MAIN OUTCOME MEASURES: Understanding of information as assessed by quantitative and qualitative means. RESULTS: Assessment scores used to measure level of understanding were significantly higher in the group that completed the computer-based task than the group that completed the paper-based task (82% v 73%; P = 0.005). More participants in the group that completed the computer-based task expressed interest in taking part in the mock study (23 v 17 participants; P = 0.01). Most participants from both groups preferred the idea of a computer-based presentation to the paper-based statement (21 in the computer-based task group, 18 in the paper-based task group). CONCLUSIONS: A computer-based method of providing information may help overcome existing deficiencies in communication about clinical research, and may reduce costs and improve efficiency in recruiting participants for clinical trials.

Relevance: 100.00%

Abstract:

Witnessing the wide spread of malicious information in large networks, we develop an efficient method to detect anomalous diffusion sources and thus protect networks from security and privacy attacks. To date, most existing work on diffusion source detection is based on the assumption that network snapshots reflecting the information diffusion can be obtained continuously. However, obtaining snapshots of an entire network requires deploying detectors on all network nodes and is thus very expensive. Alternatively, in this article, we study the diffusion source locating problem by learning from information diffusion data collected from only a small subset of network nodes. Specifically, we present a new regression learning model that detects anomalous diffusion sources by jointly addressing five challenges: an unknown number of source nodes, few activated detectors, an unknown initial propagation time, uncertain propagation paths, and uncertain propagation time delays. We theoretically analyze the strength of the model and derive performance bounds. We empirically test and compare the model on both synthetic and real-world networks to demonstrate its performance.
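The core idea of locating a source from a few detectors can be illustrated with a brute-force sketch: score each candidate node by how well its hop distances explain the observed detector activation times, fitting a constant offset for the unknown start time. This is an illustrative toy, not the article's regression learning model, and the graph and timings are invented:

```python
# Sketch: locate a single diffusion source from activation times observed at a
# few detector nodes, by scoring candidates against BFS hop distances. The
# unknown initial propagation time is handled by fitting a constant offset.
# Brute-force illustration only; not the article's regression learning model.
from collections import deque

def bfs_hops(graph, src):
    """Hop distance from src to every reachable node."""
    hops, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                q.append(v)
    return hops

def locate_source(graph, detections):
    """detections: {detector_node: observed activation time, in hop units}."""
    best, best_err = None, float("inf")
    for cand in graph:
        hops = bfs_hops(graph, cand)
        residuals = [t - hops.get(d, float("inf")) for d, t in detections.items()]
        offset = sum(residuals) / len(residuals)  # best-fit unknown start time
        err = sum((r - offset) ** 2 for r in residuals)
        if err < best_err:
            best, best_err = cand, err
    return best

graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}  # path 0-1-2-3-4
print(locate_source(graph, {0: 2, 4: 2}))  # symmetric detections -> node 2
```

Extending this toy to multiple sources, uncertain paths, and random delays is precisely what makes the problem hard and motivates the article's joint learning formulation.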

Relevance: 100.00%

Abstract:

The advent of the World Wide Web (WWW) and the emergence of Internet commerce have given rise to the web as a medium of information exchange. In recent years, the phenomenon has affected the realm of transaction processing systems, as organizations move from designing web pages for marketing purposes to web-based applications that support business-to-business (B2B) and business-to-consumer (B2C) interactions, integrated with databases and other back-end systems (Isakowitz, Bieber et al., 1998). Furthermore, web-enabled applications are increasingly being used to facilitate transactions even between various business units within a single enterprise. Examples of some of the more popular web-enabled applications in use today include airline reservation systems, internet banking, student enrollment systems in universities, and Human Resource (HR) and payroll systems. The prime motives behind the adoption of web-enabled applications are productivity gains due to reduced processing time, decreased use of paper-based documentation and conventional modes of communication (such as letters, fax, or telephone), and improved quality of service to clients. Indeed, web-based solutions are commonly referred to as customer-centric (Li, 2000), meaning that they provide user interfaces that do not necessitate a high level of computer proficiency. Thus, organizations implement such systems to streamline routine transactions and gain strategic benefits in the process (Nambisan & Wang, 1999), though the latter are to be expected in the long term. Notwithstanding the benefits of web technology adoption, the web has an ample share of challenges for initiators and developers. Many of these challenges are associated with the unique nature of web-enabled applications. Research in the area of web-enabled information systems has revealed several differences from traditional applications.
These differences exist with regard to system development methodology, stakeholder involvement, tasks, and technology (Nazareth, 1998). According to Fraternali (1999), web applications are commonly developed using an evolutionary prototyping approach, whereby a simplified version of the application is first deployed as a pilot in order to gather user feedback. Thus, web-enabled applications typically undergo continuous refinement and evolution (Ginige, 1998; Nazareth, 1998; Siau, 1998; Standing, 2001). Prototype-based development also gives web-enabled information systems much shorter development life cycles, but, unlike traditional applications, they are regrettably developed in a rather ad hoc fashion (Carstensen & Vogelsang, 2001). However, the principal difference between the two kinds of applications lies in the broad and diverse group of stakeholders associated with web-based information systems (Gordijn, Akkermans, et al., 2000; Russo, 2000; Earl & Khan, 2001; Carter, 2002; Hasselbring, 2002; Standing, 2002; Stevens & Timbrell, 2002). Stakeholders, or organizational members participating in a common business process (Freeman, 1984), vary in their computer competency, business knowledge, language and culture. This diversity can cause conflict between different stakeholder groups over the establishment of system requirements (Pouloudi & Whitley, 1997; Stevens & Timbrell, 2002). Since web-based systems transcend organizational, departmental, and even national boundaries, the issue of culture poses a significant challenge to web systems' initiators and developers (Miles & Snow, 1992; Kumar & van Dissel, 1996; Pouloudi & Whitley, 1996; Li & Williams, 1999).

Relevance: 100.00%

Abstract:

The objective of this study was to determine the feasibility and acceptability of a referral and outcall programme from a telephone-based information and support service for men newly diagnosed with colorectal or prostate cancer. A block-randomized controlled trial was performed involving 100 newly diagnosed colorectal and prostate cancer patients. Patients were referred to the Cancer Information Support Service (CISS) by clinicians at diagnosis. Clinicians were randomized into one of three conditions. Active referral 1: specialist referral with four CISS outcalls: (1) within 1 week of diagnosis; (2) at 6 weeks; (3) at 3 months; and (4) at 6 months post diagnosis. Active referral 2: specialist referral with one CISS outcall within 1 week of diagnosis. Passive referral: the specialist recommended that the patient contact CISS, with contact left to the patient's initiative. Patients completed research questionnaires at study entry (before CISS contact), then 4 and 7 months post diagnosis. Overall, 96% of participants reported a positive experience with the referral process; 87% reported that they were not concerned about receiving the calls; and 84% indicated that the timing of the calls was helpful. In conclusion, the referral and outcall programme was achievable and acceptable for men newly diagnosed with colorectal or prostate cancer.