920 results for Partial data fusion


Relevance: 80.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance: 80.00%

Abstract:

This study explored purchases of non-prescription medicines in New Zealand. Researchers were stationed for 5 days in 12 pharmacies throughout New Zealand during June and July of 1999. A brief questionnaire was administered, for each medicine purchased, to all available purchasers aged 16 years and over. At least partial data were collected for 2,597 medicine purchases (approximately 71.2% of medicine sales). Respiratory products comprised 42% of sales. Pharmacists were involved in 19.9% of medicine sales. Pharmacy staff accounted for 62.2% of the 792 reported influences on first-time purchases. The study demonstrated a viable method of data collection and yielded valuable pharmaceutical marketing data.

Relevance: 80.00%

Abstract:

An approach and strategy for automatic detection of buildings from aerial images using combined image analysis and interpretation techniques is described in this paper. The procedure comprises several steps. A dense DSM is obtained by stereo image matching; the results of multi-band classification, the DSM, and the Normalized Difference Vegetation Index (NDVI) are then used to reveal preliminary building interest areas. From these areas, a shape modeling algorithm precisely delineates building boundaries. The Dempster-Shafer data fusion technique is then applied to detect buildings from the combination of the three data sources by a statistically based classification. A number of test areas containing buildings of different sizes, shapes, and roof colors have been investigated. The tests are encouraging and demonstrate that all processing stages in this system are important for effective building detection.
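The Dempster-Shafer combination step can be illustrated with a minimal sketch. The mass values assigned to the DSM and NDVI sources below are invented for illustration; only the combination rule itself follows the standard formulation.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to the empty set
    # Normalize by the non-conflicting mass
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

B, N = frozenset({"building"}), frozenset({"not_building"})
theta = B | N  # frame of discernment (total ignorance)

# Hypothetical evidence from the DSM (height) and the NDVI classification
m_dsm = {B: 0.6, theta: 0.4}
m_ndvi = {B: 0.5, N: 0.2, theta: 0.3}
fused = dempster_combine(m_dsm, m_ndvi)
```

After combination, the belief committed to "building" exceeds either source alone, which is what makes the rule useful for corroborating evidence from independent sources.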

Relevance: 80.00%

Abstract:

Reliable perception of the real world is a key feature for autonomous vehicles and Advanced Driver Assistance Systems (ADAS). Obstacle detection (OD) is one of the main components for correctly reconstructing the dynamic world. Approaches based on stereo vision and other 3D perception technologies (e.g. LIDAR) have been adapted first to ADAS and later to autonomous ground vehicles, providing excellent results. Obstacle detection is a very broad field that has attracted a large body of work in recent years. Academic research has clearly established the essential role of these systems in active safety for accident prevention, reflecting the innovative systems introduced by industry. Such systems must accurately assess situational criticalities and, simultaneously, the driver's awareness of them; this requires obstacle detection algorithms to be reliable and accurate, providing real-time output, a stable and robust representation of the environment, and estimates independent of lighting and weather conditions. Early systems relied on a single exteroceptive sensor (e.g. radar or laser for ACC, a camera for LDW) in addition to proprioceptive sensors such as wheel-speed and yaw-rate sensors. Current systems, however, such as full-speed-range ACC or autonomous braking for collision avoidance, require multiple sensors, since no single sensor can meet all of these requirements. This has led the community to combine sensors in order to exploit the benefits of each. Pedestrian and vehicle detection are among the major thrusts in situational-criticality assessment and remain an active area of research, with ADAS as their most prominent use case.
Vehicles should be equipped with sensing capabilities able to detect and act on objects in dangerous situations where the driver would not be able to avoid a collision. With regard to pedestrians and vehicles, a full ADAS or autonomous vehicle would include not only detection but also tracking, orientation estimation, intent analysis, and collision prediction. The system presented here detects obstacles using a probabilistic occupancy grid built from a multi-resolution disparity map. Obstacle classification is based on an AdaBoost SoftCascade trained on Aggregate Channel Features. A final stage of tracking and fusion guarantees stability and robustness of the result.
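A probabilistic occupancy grid of the kind mentioned above is typically maintained with per-cell log-odds updates from each disparity-derived measurement. The following is a minimal single-cell sketch; the inverse-sensor probabilities (p_hit, p_miss) are illustrative assumptions, not values from the paper.

```python
import math

def logodds(p):
    return math.log(p / (1.0 - p))

def update_cell(l_prior, hit, p_hit=0.7, p_miss=0.4):
    """Bayesian log-odds update of one grid cell from a stereo detection.

    hit=True means the disparity map placed an obstacle in this cell;
    p_hit/p_miss form a (hypothetical) inverse sensor model.
    """
    return l_prior + logodds(p_hit if hit else p_miss)

# One cell, starting at log-odds 0 (occupancy probability 0.5),
# updated over four frames of a simulated detection sequence
cell = 0.0
for hit in [True, True, False, True]:
    cell = update_cell(cell, hit)
prob = 1.0 - 1.0 / (1.0 + math.exp(cell))  # back to probability
```

Accumulating evidence in log-odds form keeps the update a cheap addition per frame, which matters for the real-time requirement stated in the abstract.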

Relevance: 80.00%

Abstract:

Floods are among the most devastating natural hazards in the world, affecting more people and causing more property damage than any other natural phenomenon. One important problem in flood monitoring is extracting the flood extent from satellite imagery, since it is impractical to survey the flooded area through field observations. This paper presents a method for flood extent extraction from synthetic-aperture radar (SAR) images based on intelligent computations. In particular, we apply artificial neural networks, namely self-organizing Kohonen maps (SOMs), for SAR image segmentation and classification. We tested our approach on data from three different satellite sensors: ERS-2/SAR (during flooding on the Tisza river, Ukraine and Hungary, 2001), ENVISAT/ASAR WSM (Wide Swath Mode), and RADARSAT-1 (during flooding on the Huaihe river, China, 2007). The results obtained show the efficiency of our approach.
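Kohonen-style SOM training for water/land separation can be sketched on scalar (single-band) backscatter values. With only two units the neighborhood function degenerates to winner-take-all, which is a simplification of a full SOM; the pixel values below are invented, not SAR data from the paper.

```python
import random

def train_som(samples, n_units=2, epochs=50, lr0=0.5):
    """Minimal 1-D Kohonen SOM: each unit is a scalar codebook value.

    A full SOM would also update the winner's neighbors; with two units
    this winner-take-all version behaves like online k-means.
    """
    random.seed(0)  # reproducible initialization
    units = [random.random() for _ in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)  # decaying learning rate
        for x in samples:
            bmu = min(range(n_units), key=lambda i: abs(units[i] - x))
            units[bmu] += lr * (x - units[bmu])  # pull winner toward sample
    return sorted(units)

# Hypothetical normalized backscatter: low values = calm water, high = land
pixels = [0.05, 0.08, 0.1, 0.12, 0.7, 0.75, 0.8, 0.85]
water_u, land_u = train_som(pixels)

def classify(x):
    return "water" if abs(x - water_u) < abs(x - land_u) else "land"
```

After training, each pixel is labeled by its nearest codebook unit, which is the segmentation step the abstract refers to.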

Relevance: 80.00%

Abstract:

This article discusses implications of participant withdrawal for inductive research. I describe and analyze how a third of my participants withdrew from a grounded theory study. I position my example, the ensuing issues, and potential solutions as reflective of inductive methodologies as a whole. The crux of the problem is the disruption that withdrawal inflicts on inductive processes of generating knowledge. I examine the subsequent methodological and ethical issues in trying to determine the best course of action following withdrawal. I suggest three potential options for researchers: continuing the study with partial data, continuing the study with all data, and discontinuing the study. Motivated by my experience and wider theoretical considerations, I present several suggestions and questions, with the aim of supporting researchers in determining the best course of action for their individual field circumstances.

Relevance: 80.00%

Abstract:

Heterogeneous datasets arise naturally in most applications due to the use of a variety of sensors and measuring platforms. Such datasets can be heterogeneous in terms of error characteristics and sensor models. Treating such data is most naturally accomplished using a Bayesian or model-based geostatistical approach; however, such methods generally scale rather badly with the size of the dataset and require computationally expensive Monte Carlo based inference. Recently, within the machine learning and spatial statistics communities, many papers have explored the potential of reduced-rank representations of the covariance matrix, often referred to as projected or fixed-rank approaches. In such methods the covariance function of the posterior process is represented by a reduced-rank approximation chosen so that there is minimal information loss. In this paper a sequential Bayesian framework for inference in such projected processes is presented. The observations are considered one at a time, which avoids the high-dimensional integrals typically required in a Bayesian approach. A C++ library, gptk, which is part of the INTAMAP web service, is introduced; it implements projected, sequential estimation and adds several novel features. In particular, the library includes the ability to use a generic observation operator, or sensor model, to permit data fusion. It can also cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the covariance parameters is explored, including the impact of the projected-process approximation on likelihood profiles. We illustrate the projected sequential method in application to synthetic and real datasets. Limitations and extensions are discussed. © 2010 Elsevier Ltd.
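The one-observation-at-a-time idea can be sketched as a Kalman-style Bayesian update of the reduced-rank weights, where each row vector h plays the role of the generic observation operator (sensor model) and each sensor carries its own noise variance. This is a simplified stand-in for the approach, not the gptk implementation; all numbers are illustrative.

```python
import numpy as np

def sequential_update(mu, Sigma, h, y, noise_var):
    """One-observation Bayesian update of reduced-rank process weights.

    mu, Sigma : current mean/covariance of the projected-process weights
    h         : observation operator row (sensor model) mapping weights to y
    noise_var : this sensor's observation noise variance
    """
    s = h @ Sigma @ h + noise_var            # predictive variance of y
    k = Sigma @ h / s                        # gain vector
    mu = mu + k * (y - h @ mu)               # mean update
    Sigma = Sigma - np.outer(k, h @ Sigma)   # covariance update
    return mu, Sigma

# Two basis functions; two heterogeneous sensors with different noise levels
mu = np.zeros(2)
Sigma = np.eye(2)
obs = [(np.array([1.0, 0.5]), 1.2, 0.1),   # hypothetical precise sensor
       (np.array([0.3, 1.0]), 0.4, 1.0)]   # hypothetical noisy sensor
for h, y, r in obs:
    mu, Sigma = sequential_update(mu, Sigma, h, y, r)
```

Processing observations one at a time keeps every update a rank-one operation on the (small) weight covariance, which is what makes the projected sequential scheme scale to large datasets.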

Relevance: 80.00%

Abstract:

Quantitative analysis of solid-state processes from isothermal microcalorimetric data is straightforward if data for the total process have been recorded, and problematic (the more likely case) when they have not. Data are usually plotted as a function of fraction reacted (α); for calorimetric data, this requires knowledge of the total heat change (Q) upon completion of the process. Determination of Q is difficult in cases where the process is fast (initial data missing) or slow (final data missing). Here we introduce several mathematical methods that allow the direct calculation of Q by selection of data points when only partial data are present, based on analysis with the Pérez-Maqueda model. All methods in addition allow direct determination of the reaction mechanism descriptors m and n and, from these, the rate constant, k. The validity of the methods is tested with the use of simulated calorimetric data, and we introduce a graphical method for generating solid-state power-time data. The methods are then applied to the crystallization of indomethacin from a glass. All methods correctly recovered the total reaction enthalpy (16.6 J) and suggested that the crystallization followed an Avrami model. The rate constants for crystallization were determined to be 3.98 × 10^-6, 4.13 × 10^-6, and 3.98 × 10^-6 s^-1 with methods 1, 2, and 3, respectively. © 2010 American Chemical Society.
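The idea of recovering Q from partial data can be illustrated by fitting the Avrami-form heat flow P(t) = Q * n * k^n * t^(n-1) * exp(-(k t)^n) to a record whose initial portion is missing. The grid-search fit below is a pedagogical stand-in for the paper's direct-calculation methods; the parameter values echo the reported results, but the data are synthetic and n is assumed known.

```python
import math

def avrami_power(t, Q, k, n):
    """Heat flow for Avrami crystallization: P = Q * dα/dt."""
    return Q * n * (k ** n) * t ** (n - 1) * math.exp(-((k * t) ** n))

# Synthetic "measured" record whose early portion is missing (fast process)
Q_true, k_true, n = 16.6, 4.0e-6, 2.0
times = list(range(100_000, 600_000, 10_000))  # starts late: initial data lost
power = [avrami_power(t, Q_true, k_true, n) for t in times]

# Coarse grid search for (Q, k) on the partial data, n assumed known
best = None
for Q in [15 + 0.1 * i for i in range(31)]:          # 15.0 .. 18.0 J
    for k in [3.0e-6 + 1e-7 * i for i in range(21)]: # 3.0e-6 .. 5.0e-6 s^-1
        err = sum((avrami_power(t, Q, k, n) - p) ** 2
                  for t, p in zip(times, power))
        if best is None or err < best[0]:
            best = (err, Q, k)
_, Q_fit, k_fit = best
```

Even without the early transient, the shape of the decaying tail pins down both the total enthalpy Q and the rate constant k, which is the essential point of working from partial data.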

Relevance: 80.00%

Abstract:

Location systems have become an increasingly common part of people's lives. For outdoor environments, GPS is the standard technology, widely disseminated and used. However, people usually spend most of their daily time in indoor environments, such as hospitals, universities, factories, and office buildings, where GPS does not work properly and positioning is inaccurate. Currently, no single technology can reproduce indoors the results that GPS achieves outdoors for locating people or objects. It is therefore necessary to combine information from multiple sources using different technologies. This work thus aims to build an adaptable platform for indoor location: the IndoLoR platform. The platform allows information reception from different sources, data processing, data fusion, data storage, and data retrieval for the indoor location context.


Relevance: 80.00%

Abstract:

To project the future development of soil organic carbon (SOC) storage in permafrost environments, the spatial and vertical distribution of key soil properties and their landscape controls need to be understood. This article reports findings from the Arctic Lena River Delta, where we sampled 50 soil pedons. These were classified according to the U.S.D.A. Soil Taxonomy and fall mostly into the Gelisol soil order used for permafrost-affected soils. Soil profiles were sampled for the active layer (mean depth 58±10 cm) and the upper permafrost to one meter depth. We analyze SOC stocks and key soil properties, i.e. C%, N%, C/N, bulk density, and visible ice and water content. These are compared across different landscape groupings of pedons, according to geomorphology, soil, and land cover, and across different vertical depth increments. High-resolution vertical plots are used to understand soil development; they show that SOC storage can be highly variable with depth. We recommend treating permafrost-affected soils according to subdivisions into: the surface organic layer, mineral subsoil in the active layer, organic-enriched cryoturbated or buried horizons, and mineral subsoil in the permafrost. The major geomorphological units of a subregion of the Lena River Delta were mapped with a landform classification using a data-fusion approach combining optical satellite imagery and digital elevation data to upscale SOC storage. Landscape mean SOC storage is estimated at 19.2±2.0 kg C/m². Our results show that the geomorphological setting explains more soil variability than soil taxonomy classes or vegetation cover. The soils of the oldest, Pleistocene-aged unit of the delta store the highest amount of SOC per m², followed by the Holocene river terrace. The thermally degraded Pleistocene terrace, the recent floodplain, and bare alluvial sediments store considerably less SOC, in descending order.

Relevance: 80.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 80.00%

Abstract:

Artificial immune systems have previously been applied to the problem of intrusion detection. The aim of this research is to develop an intrusion detection system based on the function of Dendritic Cells (DCs). DCs are antigen-presenting cells and key to the activation of the human immune system; this behaviour has been abstracted to form the Dendritic Cell Algorithm (DCA). In algorithmic terms, individual DCs perform multi-sensor data fusion, asynchronously correlating the fused data signals with a secondary data stream. The aggregate output of a population of cells is analysed and forms the basis of an anomaly detection system. In this paper the DCA is applied to the detection of outgoing port scans using TCP SYN packets. Results show that detection can be achieved with the DCA, yet some false positives can be encountered when simultaneously scanning and using other network services. Suggestions are made for using adaptive signals to alleviate this newly uncovered problem.
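The multi-sensor fusion performed by an individual DC can be sketched as a weighted sum of PAMP, danger, and safe signals, accumulated until a migration threshold is reached, at which point the cell reports a context for the antigens it sampled. The weights and threshold below are illustrative, loosely patterned on published DCA examples rather than taken from this paper.

```python
# Signal-fusion weights (rows: csm, semi-mature, mature; cols: PAMP, danger,
# safe). Values are illustrative, not the paper's.
WEIGHTS = {
    "csm":  (2.0, 1.0, 2.0),
    "semi": (0.0, 0.0, 3.0),
    "mat":  (2.0, 1.0, -3.0),
}

def fuse(pamp, danger, safe):
    """Weighted-sum signal fusion performed by one dendritic cell."""
    return {k: w[0] * pamp + w[1] * danger + w[2] * safe
            for k, w in WEIGHTS.items()}

class DendriticCell:
    def __init__(self, migration_threshold=10.0):
        self.threshold = migration_threshold
        self.csm = self.semi = self.mat = 0.0
        self.antigens = []

    def sense(self, antigen, pamp, danger, safe):
        out = fuse(pamp, danger, safe)
        self.csm += out["csm"]; self.semi += out["semi"]; self.mat += out["mat"]
        self.antigens.append(antigen)
        if self.csm >= self.threshold:  # cell migrates and presents a context
            return "anomalous" if self.mat > self.semi else "normal"
        return None

cell = DendriticCell()
verdict = None
# Port-scan-like input: strong danger signal, almost no safe signal
for pkt in range(5):
    verdict = verdict or cell.sense(f"proc-{pkt}", pamp=1.0, danger=2.0, safe=0.1)
```

The anomaly decision thus emerges from the ratio of accumulated mature to semi-mature output at migration time, rather than from any single input.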

Relevance: 80.00%

Abstract:

An important part of computed tomography is the calculation of a three-dimensional reconstruction of an object from a series of X-ray images. Unfortunately, some applications do not provide sufficient X-ray images; the reconstructed objects then no longer truly represent the original, and inside the volumes the accuracy seems to vary unpredictably. In this paper, we introduce a novel method to evaluate any reconstruction, voxel by voxel. The evaluation is based on a sophisticated probabilistic handling of the measured X-rays, as well as the inclusion of a priori knowledge about the materials that the examined object consists of. For each voxel, the proposed method outputs a numerical value that represents the probability that a predefined material exists at the position of that voxel. Such a probabilistic quality measure was lacking so far. In our experiments, falsely reconstructed areas are detected by their low probability, while a high probability predominates in exactly reconstructed areas. Receiver Operating Characteristics not only confirm the reliability of our quality measure but also demonstrate that existing methods are less suitable for evaluating a reconstruction.
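The per-voxel probability of a predefined material can be sketched as a Bayes-rule posterior over a small set of materials with known attenuation coefficients. This simplified model ignores the ray-by-ray probabilistic handling of the actual method; the attenuation values, priors, and noise width below are all invented for illustration.

```python
import math

def gaussian(x, mu, sigma):
    """Gaussian density, modeling reconstruction noise around a material's value."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def material_probability(mu_voxel, materials, priors, sigma=0.05):
    """Posterior probability of each material given a voxel's reconstructed
    attenuation value, via Bayes' rule over the known material set."""
    likes = {m: gaussian(mu_voxel, att, sigma) * priors[m]
             for m, att in materials.items()}
    z = sum(likes.values())  # normalizing constant
    return {m: l / z for m, l in likes.items()}

# Hypothetical attenuation coefficients (the a priori material knowledge)
materials = {"air": 0.0, "plastic": 0.2, "aluminium": 0.6}
priors = {"air": 0.5, "plastic": 0.3, "aluminium": 0.2}
post = material_probability(0.58, materials, priors)
```

A voxel whose reconstructed value sits close to one material's expected attenuation gets a posterior near 1 for that material; ambiguous values in falsely reconstructed regions would split mass across materials, producing the low probabilities the abstract describes.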

Relevance: 80.00%

Abstract:

Artificial immune systems, more specifically the negative selection algorithm, have previously been applied to intrusion detection. The aim of this research is to develop an intrusion detection system based on a novel concept in immunology, the Danger Theory. Dendritic Cells (DCs) are antigen-presenting cells and key to the activation of the human immune system. DCs perform the vital role of combining signals from the host tissue and correlating these signals with proteins known as antigens. In algorithmic terms, individual DCs perform multi-sensor data fusion based on time windows. The whole population of DCs asynchronously correlates the fused signals with a secondary data stream. The behaviour of human DCs is abstracted to form the DC Algorithm (DCA), which is implemented using an immune-inspired framework, libtissue. This system is used to detect context switching for a basic machine learning dataset and to detect outgoing portscans in real time. Experimental results show a significant difference between an outgoing portscan and normal traffic.