807 results for decentralised data fusion framework


Relevance: 30.00%

Abstract:

In a world where massive amounts of data are recorded on a large scale, we need data mining technologies to gain knowledge from the data in a reasonable time. The Top Down Induction of Decision Trees (TDIDT) algorithm is a very widely used technology to predict the classification of newly recorded data. However, alternative technologies have been derived that often produce better rules but do not scale well on large datasets. One such alternative to TDIDT is the PrismTCS algorithm. PrismTCS performs particularly well on noisy data but does not scale well on large datasets. In this paper we introduce Prism and investigate its scaling behaviour. We describe how we improved the scalability of the serial version of Prism and investigate its limitations. We then describe our work to overcome these limitations by developing a framework to parallelise algorithms of the Prism family and similar algorithms. We also present the scale-up results of a first prototype implementation.
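Prism-family algorithms follow a separate-and-conquer strategy rather than TDIDT's divide-and-conquer. As a rough illustration only (the function and toy data below are hypothetical, not the paper's implementation), a minimal serial Prism-style learner for one target class can be sketched as:

```python
def prism(instances, target_class):
    """Separate-and-conquer rule induction for one class, Prism-style.

    `instances` is a list of (attributes_dict, class_label) pairs.
    Each rule is grown by greedily adding the attribute-value test with
    the highest precision for the target class, until the rule covers
    only target-class instances; covered instances are then removed.
    """
    rules = []
    remaining = list(instances)
    while any(cls == target_class for _, cls in remaining):
        rule = {}                 # attribute -> required value
        covered = remaining
        while any(cls != target_class for _, cls in covered):
            best, best_prec = None, -1.0
            for attrs, _ in covered:
                for a, v in attrs.items():
                    if a in rule:
                        continue
                    matched = [(x, c) for x, c in covered if x.get(a) == v]
                    prec = sum(c == target_class for _, c in matched) / len(matched)
                    if prec > best_prec:
                        best, best_prec = (a, v), prec
            if best is None:      # no attribute left to specialise on
                break
            rule[best[0]] = best[1]
            covered = [(x, c) for x, c in covered if x.get(best[0]) == best[1]]
        rules.append(rule)
        # remove the instances this rule covers and continue
        remaining = [(x, c) for x, c in remaining
                     if not all(x.get(a) == v for a, v in rule.items())]
    return rules
```

On a tiny weather-style dataset this yields one rule covering the single positive instance; the scaling problems the paper addresses come from the repeated candidate scans over large datasets.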

Relevance: 30.00%

Abstract:

Distributed and collaborative data stream mining in a mobile computing environment is referred to as Pocket Data Mining (PDM). The large number of data streams to which smart phones can subscribe, or which they can sense directly, coupled with the increasing computational power of handheld devices, motivates the development of PDM as a decision-making system. This emerging area of study was shown to be feasible in an earlier study using the technological enablers of mobile software agents and stream mining techniques [1]. A typical PDM process starts by having mobile agents roam the network to discover relevant data streams and resources. Other (mobile) agents encapsulating stream mining techniques then visit the relevant nodes in the network in order to build evolving data mining models. Finally, a third type of mobile agent roams the network consulting the mining agents for a final collaborative decision, when required by one or more users. In this paper, we propose the use of distributed Hoeffding trees and Naive Bayes classifiers in the PDM framework over vertically partitioned data streams. Mobile policing, health monitoring and stock market analysis are among the possible applications of PDM. An extensive experimental study is reported, showing the effectiveness of collaborative data mining with the two classifiers.
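The final collaborative-decision stage can be pictured as a weighted vote over the predictions of the distributed classifiers, each of which sees only its own vertical partition of the attributes. This is a hedged sketch: the agent structure, field names and accuracy-based weighting below are assumptions, not the PDM implementation:

```python
from collections import defaultdict

def collaborative_decision(agents, instance):
    """Weighted majority vote over distributed classifier agents.

    Each agent sees only its own subset of attributes (a vertical
    partition) and contributes its prediction with a weight, e.g. its
    estimated accuracy on recent stream data.
    """
    votes = defaultdict(float)
    for agent in agents:
        # restrict the instance to the attributes this agent can see
        local_view = {a: instance[a] for a in agent["attributes"]
                      if a in instance}
        label = agent["classify"](local_view)
        votes[label] += agent["weight"]
    return max(votes, key=votes.get)
```

A consulting agent would call `collaborative_decision` after visiting the mining agents; here the "classifiers" are stand-in lambdas rather than Hoeffding trees or Naive Bayes models.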

Relevance: 30.00%

Abstract:

Undeniably, anticipation plays a crucial role in cognition. By what means, to what extent, and what it achieves remain open questions. In a recent BBS target article, Clark (in press) depicts an integrative model of the brain that builds on hierarchical Bayesian models of neural processing (Rao and Ballard, 1999; Friston, 2005; Brown et al., 2011), and their most recent formulation using the free-energy principle borrowed from thermodynamics (Feldman and Friston, 2010; Friston, 2010; Friston et al., 2010). Hierarchical generative models of cognition, such as those described by Clark, presuppose the manipulation of representations and internal models of the world, in as much detail as is perceptually available. Perhaps surprisingly, Clark acknowledges the existence of a “virtual version of the sensory data” (p. 4), but with no reference to some of the historical debates that shaped cognitive science, related to the storage, manipulation, and retrieval of representations in a cognitive system (Shanahan, 1997), or accounting for the emergence of intentionality within such a system (Searle, 1980; Preston and Bishop, 2002). Instead of demonstrating how this Bayesian framework responds to these foundational questions, Clark describes the structure and the functional properties of an action-oriented, multi-level system that is meant to combine perception, learning, and experience (Niedenthal, 2007).

Relevance: 30.00%

Abstract:

Drought is a global problem that has far-reaching impacts, especially on vulnerable populations in developing regions. This paper highlights the need for a Global Drought Early Warning System (GDEWS), the elements that constitute its underlying framework (GDEWF) and the recent progress made towards its development. Many countries lack drought monitoring systems, as well as the capacity to respond via appropriate political, institutional and technological frameworks, and this has inhibited the development of integrated drought management plans and early warning systems. The GDEWS will provide a source of drought tools and products, via the GDEWF, for countries and regions to develop drought early warning systems tailored to their own users. A key goal of the GDEWS is to maximize the lead time for early warning, allowing drought managers and disaster coordinators more time to put mitigation measures in place to reduce vulnerability to drought. To achieve this, the GDEWF will take both a top-down approach, providing global real-time drought monitoring and seasonal forecasting, and a bottom-up approach that builds upon existing national and regional systems to provide continental to global coverage. A number of challenges must be overcome, however, before a GDEWS can become a reality, including the lack of in-situ measurement networks, modest seasonal forecast skill in many regions, and the lack of infrastructure to translate data into useable information. A set of international partners, through a series of recent workshops and evolving collaborations, has made progress towards meeting these challenges and developing a global system.

Relevance: 30.00%

Abstract:

Airborne lidar provides accurate height information about objects on the Earth's surface and has been recognized as a reliable and accurate surveying tool in many applications. In particular, lidar data offer vital and significant features for urban land-cover classification, which is an important task in urban land-use studies. In this article, we present an effective approach in which lidar data fused with co-registered images (i.e. aerial colour images containing red, green and blue (RGB) bands and near-infrared (NIR) images) and other derived features are used for accurate urban land-cover classification. The proposed approach begins with an initial classification performed using the Dempster–Shafer theory of evidence with a specifically designed basic probability assignment function. It outputs two results, i.e. the initial classification and pseudo-training samples, which are selected automatically according to the combined probability masses. Second, a support vector machine (SVM)-based probability estimator is adopted to compute the class conditional probability (CCP) for each pixel from the pseudo-training samples. Finally, a Markov random field (MRF) model is established to combine spatial contextual information into the classification; in this stage, the initial classification result and the CCP are exploited. An efficient belief propagation (EBP) algorithm is developed to search for the global minimum-energy solution of the maximum a posteriori (MAP)-MRF framework, in which three techniques are developed to speed up the standard belief propagation (BP) algorithm. Lidar and co-registered data acquired by a Toposys Falcon II are used in performance tests. The experimental results show that fusing height data and optical images is particularly suited to urban land-cover classification. No training samples are needed in the proposed approach, and the computational cost is relatively low. An average classification accuracy of 93.63% is achieved.
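The initial classification step rests on Dempster–Shafer evidence combination. As a minimal sketch (the mass functions and class labels below are illustrative, not the article's specifically designed basic probability assignment), Dempster's rule for combining two basic probability assignments over sets of class hypotheses is:

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments with Dempster's rule.

    Each assignment maps a frozenset of class labels (a focal element)
    to a mass in [0, 1]. Mass falling on the empty intersection is the
    conflict, and the surviving masses are renormalized by (1 - conflict).
    """
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb       # evidence that contradicts itself
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}
```

In the article's pipeline, mass functions derived from the lidar height channel and the optical bands would be combined per pixel, and pixels whose combined mass is decisive enough become pseudo-training samples.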

Relevance: 30.00%

Abstract:

Considerable progress has taken place in numerical weather prediction over the last decade. It has been possible to extend predictive skill in the extra-tropics of the Northern Hemisphere during winter from less than five days to seven days. Similar improvements, albeit at a lower level, have taken place in the Southern Hemisphere. Another example of improvement in the forecasts is the prediction of intense synoptic phenomena such as cyclogenesis, which on the whole is quite successful with the most advanced operational models (Bengtsson, 1989; Gadd and Kruze, 1988). A careful examination shows that there is no single cause for the improvements in predictive skill; instead they are due to several different factors encompassing the forecasting system as a whole (Bengtsson, 1985). In this paper we focus our attention on the role of data assimilation and the effect it may have in reducing the initial error and hence improving the forecast. The first part of the paper contains a theoretical discussion of error growth in simple data assimilation systems, following Leith (1983). In the second part we apply the results to actual forecast data from ECMWF. The potential for further forecast improvements within the framework of the present observing system in the two hemispheres is discussed.
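As a toy illustration of why assimilation reduces the initial error (a generic scalar example, not Leith's derivation or the ECMWF system), a single analysis step blends a forecast with an observation by inverse-variance weighting, and the resulting analysis error variance is never larger than either input's:

```python
def analysis(forecast, obs, var_f, var_o):
    """Optimal blend of a scalar forecast and a scalar observation.

    The gain weights the observation increment by the relative size of
    the forecast error variance; the analysis variance (1 - gain) * var_f
    is smaller than both var_f and var_o, so each assimilation cycle
    starts the next forecast from a smaller initial error.
    """
    gain = var_f / (var_f + var_o)
    x_a = forecast + gain * (obs - forecast)
    var_a = (1.0 - gain) * var_f
    return x_a, var_a
```

With equal forecast and observation variances the analysis sits halfway between the two values and halves the error variance; error growth during the subsequent forecast then acts on this reduced initial error.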

Relevance: 30.00%

Abstract:

The increasing use of social media, applications or platforms that allow users to interact online, ensures that this environment will provide a useful source of evidence for the forensics examiner. Current tools for the examination of digital evidence find this data problematic as they are not designed for the collection and analysis of online data. Therefore, this paper presents a framework for the forensic analysis of user interaction with social media. In particular, it presents an inter-disciplinary approach for the quantitative analysis of user engagement to identify relational and temporal dimensions of evidence relevant to an investigation. This framework enables the analysis of large data sets from which a (much smaller) group of individuals of interest can be identified. In this way, it may be used to support the identification of individuals who might be ‘instigators’ of a criminal event orchestrated via social media, or a means of potentially identifying those who might be involved in the ‘peaks’ of activity. In order to demonstrate the applicability of the framework, this paper applies it to a case study of actors posting to a social media Web site.
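The 'peaks' of activity mentioned above can be located with a simple baseline-relative threshold over binned post counts. This sketch assumes a plain count series and an arbitrary threshold factor; it is not the framework's actual quantitative method:

```python
def activity_peaks(counts, window=3, factor=2.0):
    """Flag time bins whose posting volume exceeds `factor` times the
    mean of up to `window` neighbouring bins on each side.

    `counts` is a list of post counts per time bin; returns the indices
    of bins that stand out against their local baseline.
    """
    peaks = []
    for i, c in enumerate(counts):
        lo, hi = max(0, i - window), min(len(counts), i + window + 1)
        neighbours = counts[lo:i] + counts[i + 1:hi]
        baseline = sum(neighbours) / len(neighbours) if neighbours else 0.0
        if baseline and c > factor * baseline:
            peaks.append(i)
    return peaks
```

The actors posting within flagged bins would then be candidates for closer relational analysis, e.g. as potential instigators of the surge.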

Relevance: 30.00%

Abstract:

In order to make the best use of limited medical resources, and to reduce the cost and improve the quality of medical treatment, we propose to build an interoperable regional healthcare system spanning several levels of medical treatment organizations. In this paper, our approaches are as follows: (1) an ontology-based approach is introduced as the methodology and technological solution for information integration; (2) an integration framework for data sharing among different organizations is proposed; (3) a virtual database is established to realize data integration across hospital information systems. Our methods realize the effective management and integration of the medical workflow and the mass of information in the interoperable regional healthcare system. Furthermore, this research gives the interoperable regional healthcare system the characteristics of modularity and extensibility, and the stability of the system is enhanced by its hierarchical structure.
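The virtual database of approach (3) can be pictured as a mediator that rewrites a query over a shared global schema into each source system's local schema and merges the results. The schemas, field names and records below are entirely hypothetical, chosen only to show the mediation idea:

```python
# Hypothetical source systems: each hospital exposes records under its
# own field names; the mapping relates global fields to local ones.
SOURCES = [
    {"mapping": {"patient_id": "pid", "name": "full_name"},
     "records": [{"pid": "p1", "full_name": "Ana"}]},
    {"mapping": {"patient_id": "patientId", "name": "displayName"},
     "records": [{"patientId": "p2", "displayName": "Li"}]},
]

def virtual_query(sources, global_field, value):
    """Answer a query over the virtual (global) schema.

    The query is rewritten against each source's local schema, and
    matching records are translated back into global field names.
    """
    results = []
    for src in sources:
        local_field = src["mapping"][global_field]
        for rec in src["records"]:
            if rec.get(local_field) == value:
                results.append({g: rec[l] for g, l in src["mapping"].items()})
    return results
```

In a real deployment the mappings would come from the shared ontology, which is what lets two hospitals with different hospital information systems answer the same regional query.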

Relevance: 30.00%

Abstract:

It is well known that there is a dynamic relationship between cerebral blood flow (CBF) and cerebral blood volume (CBV). With the increasing application of functional MRI, where blood oxygen-level-dependent signals are recorded, understanding and accurately modeling the hemodynamic relationship between CBF and CBV becomes increasingly important. This study presents an empirical, data-based modeling framework for model identification from experimental CBF and CBV data. It is shown that the relationship between the changes in CBF and CBV can be described using a parsimonious autoregressive with exogenous input (ARX) model structure. It is observed that neither the ordinary least-squares (LS) method nor the classical total least-squares (TLS) method can produce accurate estimates from the original noisy CBF and CBV data. A regularized total least-squares (RTLS) method is thus introduced and extended to solve this errors-in-variables problem. Quantitative results show that the RTLS method works very well on the noisy CBF and CBV data. Finally, combining RTLS with a filtering method leads to a parsimonious but very effective model that characterizes the relationship between the changes in CBF and CBV.
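The ARX model structure can be illustrated on noise-free synthetic data with a plain least-squares fit. Note the hedge: the study finds ordinary LS inadequate on the real, noisy CBF/CBV data (hence RTLS); this sketch, with a made-up first-order model and invented data, only shows the model form itself:

```python
def fit_arx(cbv, cbf):
    """Fit dCBV[t] = a * dCBV[t-1] + b * dCBF[t] by ordinary least squares.

    A first-order ARX structure (one autoregressive lag, one exogenous
    input term), solved via the closed-form 2x2 normal equations.
    """
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t in range(1, len(cbv)):
        x1, x2, y = cbv[t - 1], cbf[t], cbv[t]
        s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2
        r1 += x1 * y;   r2 += x2 * y
    det = s11 * s22 - s12 * s12
    a = (r1 * s22 - r2 * s12) / det
    b = (r2 * s11 - r1 * s12) / det
    return a, b
```

On noise-free data generated from known coefficients the fit recovers them exactly; on data where both CBF and CBV are noisy, both regressors are corrupted, which is precisely the errors-in-variables setting that motivates RTLS.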

Relevance: 30.00%

Abstract:

Background: Patients do not adhere to their medicines for a host of reasons, which can include their underlying beliefs as well as the quality of their interactions with healthcare professionals. One way of measuring the outcome of pharmacy adherence services is to assess patient satisfaction, but no questionnaire exists that truly captures patients' experiences with these relatively new services.

Objective: Our objective was to develop a conceptual framework specific to patient satisfaction with a community pharmacy adherence service, based on criteria used by patients themselves.

Setting: The study was based in community pharmacies in one large geographical area of the UK (Surrey). All the work was conducted between October 2008 and September 2010.

Methods: This study involved qualitative non-participant observation and semi-structured interviewing. We observed the recruitment of patients to the Medicines Use Review (MUR) service as well as seven actual MUR consultations, and interviewed 15 patients. Data collection continued until no new themes were identified during analysis. We analysed interviews to first create a comprehensive account of themes which had significance within the transcripts, then created sub-themes within super-ordinate categories. We used a structure-process-outcome approach to develop a conceptual framework relating to patient satisfaction with the MUR. A favourable ethical opinion for this study was received from the NHS Surrey Research Ethics Committee on 2nd June 2008.

Results: Five super-ordinate themes linked to patient satisfaction with the MUR service were identified: relationships with healthcare providers; attitudes towards healthcare providers; patients' experience of health, healthcare and medicines; patients' views of the MUR service; and the logistics of the MUR service. In the conceptual framework, structure was conceptualised as existing relationships, environment, and time; process was conceptualised as relating to the recruitment and consultation stages; and outcome as two concepts, immediate patient outcomes and satisfaction on reflection.

Conclusion: We identified and highlighted factors that can influence patient satisfaction with the MUR service, and this led to the development of a conceptual framework of patient satisfaction with the MUR service. This can form the basis for developing a questionnaire for measuring patient satisfaction with this and similar pharmacy adherence services.

Impact of findings on practice:
* Pharmacists and researchers can access the ideas presented here in relation to patient satisfaction with pharmacy adherence services.
* Researchers can use the conceptual framework as a basis for measuring the quality of pharmacy adherence services.
* Community pharmacists can improve the quality of the healthcare they provide by attending to the concepts relevant to patient satisfaction with adherence services.

Relevance: 30.00%

Abstract:

Smart healthcare is a complex domain for systems integration, due to the human and technical factors and heterogeneous data sources involved. As part of the smart city, it is an area in which clinical functions require smart multi-system collaboration for effective communication among departments, and radiology is one of the areas that relies most heavily on intelligent information integration and communication. It therefore faces many challenges regarding integration and interoperability, such as information collision, heterogeneous data sources, policy obstacles, and procedure mismanagement. The purpose of this study is to conduct an analysis of the data, semantic, and pragmatic interoperability of systems integration in a radiology department, and to develop a pragmatic interoperability framework for guiding the integration. We selected an on-going project at a local hospital for our case study. The project aims to achieve data sharing and interoperability among Radiology Information Systems (RIS), Electronic Patient Records (EPR), and Picture Archiving and Communication Systems (PACS). Qualitative data collection and analysis methods are used. The data sources consisted of documentation (including publications and internal working papers), one year of non-participant observation, and 37 interviews with radiologists, clinicians, directors of IT services, referring clinicians, radiographers, receptionists and secretaries. We identified four primary phases of the data analysis process for the case study: requirements and barriers identification, integration approach, interoperability measurements, and knowledge foundations. Each phase is discussed and supported by qualitative data. Through the analysis we also develop a pragmatic interoperability framework that summarises the empirical findings and proposes recommendations for guiding the integration in the radiology context.

Relevance: 30.00%

Abstract:

This work presents a method of information fusion involving data captured by both a standard CCD camera and a ToF camera, to be used in detecting the proximity between a manipulator robot and a human. Both cameras are assumed to be located above the work area of an industrial robot. The fusion of colour images and time-of-flight information makes it possible to know the 3D localisation of objects with respect to a world coordinate system, and at the same time allows their colour information to be known. Considering that the ToF information given by the range camera contains inaccuracies, including distance error, border error, and pixel saturation, some corrections to the ToF information are proposed and developed to improve the results. The proposed fusion method uses the calibration parameters of both cameras to reproject 3D ToF points, expressed in a coordinate system common to both cameras and a robot arm, into 2D colour images. In addition, using the 3D information, motion detection in an industrial robot environment is achieved, and the fusion of information is applied to the foreground objects previously detected. This combination of information results in a matrix that links colour and 3D information, giving the possibility of characterising an object by its colour in addition to its 3D localisation. Further development of these methods will make it possible to identify objects and their position in the real world, and to use this information to prevent possible collisions between the robot and such objects.
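The reprojection step can be sketched with the standard pinhole camera model: a world-frame 3D ToF point is moved into the colour camera's frame via the extrinsic parameters, then projected with the intrinsics. The calibration values in the example are made up, and this is not the authors' exact formulation:

```python
def reproject(point_3d, rotation, translation, fx, fy, cx, cy):
    """Project a 3D point (world coordinates) into a 2D colour image.

    `rotation` (3x3) and `translation` (3) are the extrinsic parameters
    mapping world to camera coordinates; fx, fy, cx, cy are the pinhole
    intrinsics (focal lengths and principal point, in pixels).
    """
    # world -> camera coordinates: X_c = R @ X_w + t
    xc = [sum(rotation[i][j] * point_3d[j] for j in range(3)) + translation[i]
          for i in range(3)]
    # perspective division and intrinsic scaling
    u = fx * xc[0] / xc[2] + cx
    v = fy * xc[1] / xc[2] + cy
    return u, v
```

Doing this for every corrected ToF point yields the matrix linking each colour pixel to its 3D localisation, which is what the proximity detection consumes.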

Relevance: 30.00%

Abstract:

The purpose of this study was to develop an understanding of the current state of scientific data sharing that stakeholders could use to develop and implement effective data sharing strategies and policies. The study developed a conceptual model to describe the process of data sharing, and the drivers, barriers, and enablers that determine stakeholder engagement. The conceptual model was used as a framework to structure discussions and interviews with key members of all stakeholder groups. Analysis of data obtained from interviewees identified a number of themes that highlight key requirements for the development of a mature data sharing culture.

Relevance: 30.00%

Abstract:

Automatic generation of classification rules has become an increasingly popular technique in commercial applications such as Big Data analytics, rule-based expert systems and decision-making systems. However, a principal problem that arises with most methods for the generation of classification rules is the overfitting of training data. When Big Data is dealt with, this may result in the generation of a large number of complex rules, which may not only increase computational cost but also lower the accuracy in predicting further unseen instances. This has led to the necessity of developing pruning methods for the simplification of rules. In addition, once generated, classification rules are used to make predictions. Where efficiency is concerned, it is desirable to find the first rule that fires as quickly as possible when searching through a rule set; thus a suitable structure is required to represent the rule set effectively. In this chapter, the authors introduce a unified framework for the construction of rule-based classification systems consisting of three operations on Big Data: rule generation, rule simplification and rule representation. The authors also review some existing methods and techniques used for each of the three operations and highlight their limitations, and introduce some novel methods and techniques they have recently developed. These methods and techniques are discussed in comparison to existing ones with respect to the efficient processing of Big Data.
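The 'first rule that fires' prediction step can be illustrated with the naive linear-list representation that more efficient rule-set structures aim to improve upon (the function and rule encoding are illustrative, not the chapter's):

```python
def first_firing_rule(rules, instance, default="unclassified"):
    """Return the class of the first rule whose conditions all hold.

    `rules` is an ordered list of (conditions_dict, class_label) pairs;
    a rule with empty conditions acts as a catch-all default. The linear
    scan is O(number of rules), which is the cost a better rule-set
    representation would reduce.
    """
    for conditions, label in rules:
        if all(instance.get(a) == v for a, v in conditions.items()):
            return label
    return default
```

With a large, unpruned rule set this scan is exactly where the chapter's concerns about rule simplification and rule representation bite: fewer, simpler rules mean earlier firing.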
