854 results for data gathering algorithm


Relevance:

30.00%

Publisher:

Abstract:

Online Social Network (OSN) services provided by Internet companies bring people together to chat and to share and consume information. Meanwhile, these services (which can be regarded as social media) generate huge amounts of data every day, every hour, even every minute and second. Researchers are currently interested in analyzing OSN data, extracting interesting patterns from it, and applying those patterns to real-world applications. However, the large scale of OSN data makes it difficult to analyze effectively. This dissertation applies data mining and information retrieval techniques to mine two key components of social media data: users and user-generated content. Specifically, it addresses three problems related to social media users and content: (1) how does one organize the users and the content? (2) how does one summarize the textual content so that users do not have to read every post to capture the general idea? (3) how does one identify influential users in social media to benefit other applications, e.g., marketing campaigns? The contributions of this dissertation are briefly summarized as follows. (1) It provides a comprehensive and versatile data mining framework for analyzing users and user-generated content from social media. (2) It designs a hierarchical co-clustering algorithm to organize users and content. (3) It proposes multi-document summarization methods to extract core information from social network content. (4) It introduces three important dimensions of social influence and a dynamic influence model for identifying influential users.
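
The dissertation's own hierarchical co-clustering algorithm is not reproduced in the abstract; as a rough illustration of the user-content co-clustering idea, the sketch below applies scikit-learn's standard SpectralCoclustering to a small synthetic user-by-term matrix. The matrix shape, the planted topic blocks, and the cluster count are made up for the example.

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering

# Synthetic user-by-term count matrix: rows are users, columns are terms
# appearing in their posts (values and sizes are illustrative only).
rng = np.random.default_rng(0)
users_by_terms = rng.poisson(lam=1.0, size=(30, 40))
users_by_terms[:10, :15] += rng.poisson(lam=5.0, size=(10, 15))      # topic block 1
users_by_terms[10:20, 15:30] += rng.poisson(lam=5.0, size=(10, 15))  # topic block 2

# Co-cluster users and terms simultaneously into 3 groups.
model = SpectralCoclustering(n_clusters=3, random_state=0)
model.fit(users_by_terms)

print("user cluster labels:", model.row_labels_)
print("term cluster labels:", model.column_labels_)
```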

Relevance:

30.00%

Publisher:

Abstract:

The convex hull describes the extent or shape of a set of data and is used ubiquitously in computational geometry. Common algorithms to construct the convex hull of a finite set of n points (x, y) range from O(n log n) time to O(n) time. However, a heuristic procedure is often applied first to reduce the original set of n points to a set of s < n points that still contains the hull, thereby accelerating the final hull-finding step. We present an algorithm to precondition data before building a 2D convex hull with integer coordinates, with three distinct advantages. First, for all practical purposes it is linear; second, no explicit sorting of the data is required; and third, the reduced set of s points forms an ordered set that can be pipelined directly into an O(n) time convex hull algorithm. Under these criteria a fast (O(n)) preconditioner in principle yields a fast (approximately O(n)) convex hull for an arbitrary set of points. The paper empirically evaluates and quantifies the acceleration produced by the method against the most common convex hull algorithms. Experiments on a dataset show an additional speed-up of at least four times over previously existing preconditioning methods.
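
The paper's own preconditioner is not described in enough detail here to reproduce; as a sketch of what point reduction before hull construction looks like, the following applies the classic Akl-Toussaint heuristic (discard points strictly inside the quadrilateral of the four extreme points) before calling a standard hull routine. The function name, data sizes, and the use of scipy.spatial.ConvexHull for the final step are illustrative choices, not the paper's pipeline.

```python
import numpy as np
from scipy.spatial import ConvexHull

def akl_toussaint_filter(points):
    """Discard points strictly inside the quadrilateral spanned by the points
    with extreme x and y coordinates; the convex hull is unchanged."""
    pts = np.asarray(points, dtype=float)
    corners = pts[[pts[:, 0].argmin(), pts[:, 1].argmax(),
                   pts[:, 0].argmax(), pts[:, 1].argmin()]]   # clockwise quad
    keep = np.zeros(len(pts), dtype=bool)
    for a, b in zip(corners, np.roll(corners, -1, axis=0)):
        # points with cross >= 0 lie on or outside this clockwise edge: keep them
        cross = (b[0] - a[0]) * (pts[:, 1] - a[1]) - (b[1] - a[1]) * (pts[:, 0] - a[0])
        keep |= cross >= 0
    return pts[keep]

rng = np.random.default_rng(1)
pts = rng.integers(0, 10_000, size=(100_000, 2))
reduced = akl_toussaint_filter(pts)
hull = ConvexHull(reduced)                      # final hull on the reduced set
print(len(pts), "->", len(reduced), "points;", len(hull.vertices), "hull vertices")
```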

Relevance:

30.00%

Publisher:

Abstract:

This research paper presents work on feature recognition, tool path data generation, and integration with STEP-NC (AP-238 format) for features having free-form / irregular contoured surfaces (FICS). Initially, the FICS features are modelled or imported in the UG CAD package and a closeness index is generated by comparing the FICS features with basic B-spline and Bézier curves and surfaces. Blending functions are then calculated by applying the convolution theorem. Based on the blending functions, contour-offset tool paths are generated and simulated for a 5-axis milling environment. Finally, the tool path (CL) data is integrated into the STEP-NC (AP-238) format. The tool path algorithm and the STEP-NC data are tested on various industrial parts through an automated UFUNC plugin.
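
The closeness index and blending-function computation are specific to the paper and not reproduced here; as a minimal illustration of the basic Bézier curves used in the comparison step, the sketch below evaluates a cubic Bézier curve with de Casteljau's algorithm. The control points are made-up values.

```python
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by repeatedly
    interpolating the control polygon (de Casteljau's algorithm)."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# Made-up cubic control points for illustration.
ctrl = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.5), (3.0, 2.0, 1.0), (4.0, 0.0, 0.0)]
curve = np.array([de_casteljau(ctrl, t) for t in np.linspace(0.0, 1.0, 11)])
print(curve[:3])   # first few sampled points on the curve
```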

Relevance:

30.00%

Publisher:

Abstract:

The main objective of this work was to develop a novel dimensionality reduction technique as part of an integrated pattern recognition solution capable of identifying adulterants, such as hazelnut oil in extra virgin olive oil, at low percentages based on spectroscopic chemical fingerprints. A novel Continuous Locality Preserving Projections (CLPP) technique is proposed, which allows the continuous nature of the admixtures produced in-house to be modelled as data series instead of discrete points. Maintaining the continuous structure of the data manifold enables better visualisation of the classification problem under examination and allows the manifold to be used more accurately for detecting the adulterants. The performance of the proposed technique is validated with two different spectroscopic techniques (Raman and Fourier transform infrared, FT-IR). In all cases studied, CLPP combined with the k-Nearest Neighbours (kNN) algorithm was found to outperform the other state-of-the-art pattern recognition techniques considered.
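
CLPP is the paper's novel contribution and is not reproduced here; as a rough sketch of the surrounding pipeline (project spectra to a low-dimensional space, then classify with kNN), the example below uses PCA as a placeholder projection on synthetic "spectra". The data, dimensions, and labels are all invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic "spectra": 200 samples x 500 wavenumbers, two classes
# (pure oil vs. adulterated), purely illustrative.
rng = np.random.default_rng(0)
pure = rng.normal(0.0, 1.0, size=(100, 500))
adulterated = rng.normal(0.3, 1.0, size=(100, 500))
X = np.vstack([pure, adulterated])
y = np.array([0] * 100 + [1] * 100)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# PCA stands in for the projection step (the paper uses CLPP instead).
clf = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=5))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```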

Relevance:

30.00%

Publisher:

Abstract:

Data mining can be defined as the extraction of implicit, previously unknown, and potentially useful information from data. Numerous researchers have been developing security technology and exploring new methods to detect cyber-attacks with the DARPA 1998 intrusion detection dataset and its modified versions, KDDCup99 and NSL-KDD, but until now no one has examined the performance of the Top 10 data mining algorithms selected by experts in data mining. The classification learning algorithms compared in this thesis are C4.5, CART, k-NN, and Naïve Bayes. Their performance is compared in terms of accuracy, error rate, and average cost on modified versions of the NSL-KDD training and test datasets, where the instances are classified into normal traffic and four cyber-attack categories: DoS, Probing, R2L, and U2R. Additionally, the most important features for detecting cyber-attacks, both overall and within each category, are evaluated with Weka's Attribute Evaluator and ranked according to information gain. The results show that the classification algorithm with the best performance on the dataset is k-NN. The most important features for detecting cyber-attacks are basic features such as the duration of a network connection in seconds, the protocol used for the connection, the network service used, the normal or error status of the connection, and the number of data bytes sent. The most important features for detecting DoS, Probing, and R2L attacks are basic features, and the least important are content features; for U2R attacks, by contrast, the content features are the most important.
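
The thesis's Weka experiments are not reproduced here; the sketch below shows the same two steps (information-gain-style feature ranking and k-NN classification) in scikit-learn on synthetic data standing in for NSL-KDD features. Feature count and labels are invented, and mutual information is used as the information-gain estimate.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for NSL-KDD basic features (duration, bytes sent, ...):
# 1000 connections x 10 numeric features, labelled normal (0) or attack (1).
# Labels depend on features 0 and 3, so those should rank highest.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Rank features by estimated information gain (mutual information).
scores = mutual_info_classif(X, y, random_state=0)
print("feature ranking (best first):", np.argsort(scores)[::-1])

# Evaluate k-NN with cross-validated accuracy, as in the thesis's comparison.
knn = KNeighborsClassifier(n_neighbors=5)
print("k-NN accuracy:", cross_val_score(knn, X, y, cv=5).mean())
```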

Relevance:

30.00%

Publisher:

Abstract:

In Germany, the upscaling algorithm is currently the standard approach for evaluating the PV power produced in a region. This method involves spatially interpolating the normalized power of a set of reference PV plants to estimate the power produced by another set of unknown plants. As little information on the performance of this method could be found in the literature, the first goal of this thesis is to analyze the uncertainty associated with it. It was found that the method can lead to large errors when the set of reference plants has different characteristics or weather conditions than the set of unknown plants, and when the set of reference plants is small. Based on these preliminary findings, an alternative method is proposed for calculating the aggregate power production of a set of PV plants. A probabilistic approach has been chosen, in which the power production of each PV plant is calculated from the corresponding weather data. The probabilistic approach consists of evaluating the power for each frequently occurring value of the parameters and estimating the most probable value by averaging these power values weighted by their frequency of occurrence. The most frequent parameter sets (e.g. module azimuth and tilt angle) and their frequencies of occurrence have been assessed through a statistical analysis of the parameters of approximately 35,000 PV plants. It was found that the plant parameters are statistically dependent on the size and location of the PV plants. Accordingly, separate statistical values have been assessed for 14 classes of nominal capacity and 95 regions in Germany (two-digit zip-code areas). The performance of the upscaling and probabilistic approaches has been compared on the basis of 15 min power measurements from 715 PV plants provided by the German distribution system operator LEW Verteilnetz. The error of the probabilistic method is smaller than that of the upscaling method when the number of reference plants is sufficiently large (>100 reference plants in the case study considered). When the number of reference plants is limited (<50 reference plants for the considered case study), the proposed approach provides a noticeable gain in accuracy with respect to the upscaling method.
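
As a rough illustration of the probabilistic idea described above (average the power computed for each frequent parameter combination, weighted by its frequency of occurrence), the sketch below uses a made-up toy power model and a made-up frequency table of azimuth/tilt combinations; it is not the thesis's actual model, parameter statistics, or data.

```python
import numpy as np

def toy_power(azimuth_deg, tilt_deg, irradiance):
    """Made-up stand-in for a PV power model (kW per kWp)."""
    orientation = np.cos(np.radians(azimuth_deg - 180) / 2) * np.cos(np.radians(tilt_deg - 35) / 2)
    return irradiance / 1000.0 * max(float(orientation), 0.0)

# Hypothetical frequency table of (azimuth, tilt) combinations for one
# capacity class and region, normalised to sum to 1.
frequent_params = [
    ((180, 30), 0.45),   # south-facing, 30 degree tilt
    ((180, 45), 0.25),
    ((135, 30), 0.15),   # south-east
    ((225, 30), 0.15),   # south-west
]

irradiance = 800.0  # W/m^2 from weather data at the plant location (made up)

# Most probable specific power: frequency-weighted average over parameter sets.
expected_power = sum(freq * toy_power(az, tilt, irradiance)
                     for (az, tilt), freq in frequent_params)
print("expected specific power [kW/kWp]:", round(expected_power, 3))
```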

Relevance:

30.00%

Publisher:

Abstract:

Over the last decades, work on infrared sensor applications has progressed considerably worldwide. A difficulty remains, however: objects are not always clear enough, or cannot always be easily distinguished, in the image obtained of the observed scene. Infrared image enhancement has played an important role in the development of infrared computer vision, image processing, non-destructive testing, and related technologies. This thesis addresses infrared image enhancement in two respects: the processing of a single infrared image in the hybrid space-frequency domain, and the fusion of infrared and visible images using the non-subsampled contourlet transform (NSCT). Image fusion can be regarded as a continuation of single infrared image enhancement, since it combines infrared and visible images into one image that represents and enhances all the useful information and features of the source images; a single image cannot contain all the relevant or available information because of the restrictions of any single imaging sensor. We review the development of infrared image enhancement techniques and then focus on single infrared image enhancement, proposing a hybrid-domain enhancement scheme with an improved fuzzy threshold evaluation method, which yields higher image quality and improves human visual perception. The infrared-visible fusion techniques are built on an accurate registration of the source images acquired by the different sensors. The SURF-RANSAC algorithm is applied for registration throughout this research, which leads to very accurately registered images and increased benefits for the fusion processing. For infrared-visible image fusion, a series of advanced and efficient approaches is proposed. A standard multi-channel NSCT-based fusion method is presented as a reference for the fusion approaches that follow. A joint fusion approach involving the Adaptive-Gaussian NSCT and the wavelet transform (WT) is proposed, which leads to fusion results better than those obtained with general non-adaptive methods. An NSCT-based fusion approach employing compressed sensing (CS) and total variation (TV) to sample the coefficients sparsely and reconstruct the fused coefficients accurately is proposed, which obtains much better fusion results by pre-enhancing the infrared image and reducing the redundant information in the fusion coefficients. Finally, an NSCT-based fusion procedure using a fast iterative-shrinking compressed sensing (FISCS) technique is proposed to compress the decomposed coefficients and reconstruct the fused coefficients during the fusion process, which leads to better results, obtained more rapidly and efficiently.
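
The NSCT has no widely available Python implementation, so the thesis's fusion methods are not reproduced here; as a much simpler stand-in that illustrates the same transform-domain fusion idea (decompose both registered images, fuse the coefficients, invert), the sketch below uses an ordinary wavelet decomposition via PyWavelets. The images, wavelet choice, and fusion rules are illustrative assumptions.

```python
import numpy as np
import pywt

def wavelet_fuse(infrared, visible, wavelet="haar", level=3):
    """Fuse two registered, same-size grayscale images: average the
    approximation coefficients, keep the larger-magnitude detail coefficients."""
    c_ir = pywt.wavedec2(infrared, wavelet, level=level)
    c_vis = pywt.wavedec2(visible, wavelet, level=level)

    fused = [(c_ir[0] + c_vis[0]) / 2.0]                 # approximation band
    for d_ir, d_vis in zip(c_ir[1:], c_vis[1:]):         # detail bands per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(d_ir, d_vis)))
    return pywt.waverec2(fused, wavelet)

# Synthetic stand-ins for registered infrared and visible images.
rng = np.random.default_rng(0)
ir = rng.random((128, 128))
vis = rng.random((128, 128))
print(wavelet_fuse(ir, vis).shape)
```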

Relevance:

30.00%

Publisher:

Abstract:

We analyze a real data set pertaining to reindeer fecal pellet-group counts obtained from a survey conducted in a forest area in northern Sweden. In the data set, over 70% of the counts are zeros, and there is high spatial correlation. We use conditionally autoregressive random effects to model spatial correlation in a Poisson generalized linear mixed model (GLMM), a quasi-Poisson hierarchical generalized linear model (HGLM), a zero-inflated Poisson (ZIP) model, and hurdle models. The quasi-Poisson HGLM allows for both under- and overdispersion with excessive zeros, while the ZIP and hurdle models allow only for overdispersion. In analyzing the real data set, we see that the quasi-Poisson HGLMs can perform better than the other commonly used models, for example, ordinary Poisson HGLMs, spatial ZIP, and spatial hurdle models, and that underdispersed Poisson HGLMs with spatial correlation fit the reindeer data best. We develop R code for fitting these models using a unified algorithm for the HGLMs. Spatial count responses with an extremely high proportion of zeros and underdispersion can be successfully modeled using the quasi-Poisson HGLM with spatial random effects.
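
The quasi-Poisson HGLM with conditionally autoregressive random effects is fitted with the authors' R code, which is not reproduced here; as a small Python illustration of one of the simpler models compared (a zero-inflated Poisson without spatial effects), the sketch below fits statsmodels' ZeroInflatedPoisson to synthetic zero-heavy counts. The data and the single covariate are invented.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

# Synthetic pellet-group-like counts: ~70% structural zeros, Poisson otherwise.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                         # a single made-up covariate
is_zero = rng.random(n) < 0.7                  # structural-zero indicator
counts = np.where(is_zero, 0, rng.poisson(np.exp(0.5 + 0.8 * x)))

exog = sm.add_constant(x)                      # count-model design matrix
model = ZeroInflatedPoisson(counts, exog, exog_infl=np.ones((n, 1)),
                            inflation="logit")
result = model.fit(method="bfgs", maxiter=200, disp=False)
print(result.summary())
```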

Relevance:

30.00%

Publisher:

Abstract:

Hyperspectral sensors are being developed for remote sensing applications. These sensors produce huge data volumes that require fast processing and analysis tools. Vertex component analysis (VCA) has become a very useful tool to unmix hyperspectral data. It has been successfully used to determine endmembers and unmix large hyperspectral data sets without any a priori knowledge of the constituent spectra. Compared with other geometry-based approaches, VCA is an efficient method from the computational point of view. In this paper we introduce new developments for VCA: 1) a new signal subspace identification method (HySime) is applied to infer the signal subspace in which the data set lives; this step also infers the number of endmembers present in the data set; 2) after the projection of the data set onto the signal subspace, the algorithm iteratively projects the data onto directions orthogonal to the subspace spanned by the endmembers already determined, and each new endmember signature corresponds to the extreme of the projection. The capability of VCA to unmix large hyperspectral scenes (real or simulated) with low computational complexity is also illustrated.
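
As a rough sketch of the iterative projection step described above (not the full VCA algorithm: the HySime step and the SNR-dependent dimensionality reduction are omitted), the following numpy code repeatedly projects the data onto a random direction orthogonal to the subspace of the endmembers found so far and takes the extreme pixel as the next endmember. The scene sizes are synthetic.

```python
import numpy as np

def vca_like(Y, p, seed=0):
    """Pick p endmember candidates from Y (bands x pixels) by iterative
    orthogonal projections, in the spirit of VCA (simplified sketch)."""
    rng = np.random.default_rng(seed)
    bands, _ = Y.shape
    E = np.zeros((bands, p))            # endmember signatures found so far
    idx = np.zeros(p, dtype=int)
    for i in range(p):
        # random direction orthogonal to the span of the current endmembers
        w = rng.standard_normal(bands)
        f = (np.eye(bands) - E @ np.linalg.pinv(E)) @ w
        f /= np.linalg.norm(f)
        v = f @ Y                       # project every pixel onto f
        idx[i] = int(np.argmax(np.abs(v)))   # extreme of the projection
        E[:, i] = Y[:, idx[i]]
    return E, idx

# Synthetic scene: 3 endmembers mixed into 1000 pixels over 50 bands.
rng = np.random.default_rng(1)
M = rng.random((50, 3))                         # true endmember signatures
A = rng.dirichlet(np.ones(3), size=1000).T      # abundances (sum to one)
Y = M @ A
E, idx = vca_like(Y, p=3)
print("selected pixel indices:", idx)
```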

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we develop a fast implementation of the hyperspectral coded aperture (HYCA) algorithm on different platforms using OpenCL, an open standard for parallel programming on heterogeneous systems, which covers a wide variety of devices, from dense multicore systems by major manufacturers such as Intel or ARM to accelerators such as graphics processing units (GPUs), field programmable gate arrays (FPGAs), the Intel Xeon Phi, and other custom devices. Our proposed implementation of HYCA significantly reduces its computational cost. Our experiments, conducted using simulated data, reveal considerable acceleration factors. Implementations of this kind, written in the same descriptive language for different architectures, are very important in order to realistically assess the possibility of using heterogeneous platforms for efficient hyperspectral image processing in real remote sensing missions.
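
HYCA itself is not reproduced here; as a minimal illustration of the OpenCL portability point (one kernel source that runs on whatever device is available, be it CPU, GPU, or accelerator), the sketch below launches a trivial element-wise kernel through pyopencl. The kernel and array sizes are invented for the example.

```python
import numpy as np
import pyopencl as cl

kernel_src = """
__kernel void scale(__global const float *x, __global float *y, const float a) {
    int i = get_global_id(0);
    y[i] = a * x[i];   // trivial element-wise operation, stands in for real work
}
"""

# The same source builds and runs on any available OpenCL device (CPU, GPU, ...).
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
program = cl.Program(ctx, kernel_src).build()

x = np.random.rand(1 << 20).astype(np.float32)
y = np.empty_like(x)
mf = cl.mem_flags
x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
y_buf = cl.Buffer(ctx, mf.WRITE_ONLY, y.nbytes)

program.scale(queue, x.shape, None, x_buf, y_buf, np.float32(2.0))
cl.enqueue_copy(queue, y, y_buf)
print(np.allclose(y, 2.0 * x))
```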

Relevance:

30.00%

Publisher:

Abstract:

The last decades have been characterized by a continuous adoption of IT solutions in the healthcare sector, which has resulted in the proliferation of tremendous amounts of data over heterogeneous systems. Distinct data types are currently generated, manipulated, and stored across the several institutions where patients are treated. Sharing these data and providing integrated access to them would allow relevant knowledge to be extracted, leading to better diagnoses and treatments. This thesis proposes new integration models for gathering information and extracting knowledge from multiple, heterogeneous biomedical sources. The complexity of the scenario led us to split the integration problem according to data type and usage specificity. The first contribution is a cloud-based architecture for exchanging medical imaging services. It offers a simplified registration mechanism for providers and services, promotes remote data access, and facilitates the integration of distributed data sources. Moreover, it is compliant with international standards, ensuring the platform's interoperability with current medical imaging devices. The second proposal is a sensor-based architecture for the integration of electronic health records. It follows a federated integration model and aims to provide a scalable solution for searching and retrieving data from multiple information systems. The last contribution is an open architecture for gathering patient-level data from dispersed and heterogeneous databases. All the proposed solutions were deployed and validated in real-world use cases.

Relevance:

30.00%

Publisher:

Abstract:

With the development of electronic devices, more and more mobile clients are connected to the Internet, and they generate massive amounts of data every day. We live in an age of "Big Data", generating data on the order of hundreds of millions of records every day. By analyzing these data and making predictions, better development plans can be made. Unfortunately, traditional computation frameworks cannot meet this demand, which is why Hadoop was put forward. The paper first introduces the background and development status of Hadoop, compares MapReduce in Hadoop 1.0 with YARN in Hadoop 2.0, and analyzes their advantages and disadvantages. Because the resource management module plays the core role in YARN, the paper then studies the resource allocation module, including resource management, the resource allocation algorithm, the resource preemption model, and the whole resource scheduling process from requesting resources to completing the allocation. It also introduces and compares the FIFO Scheduler, the Capacity Scheduler, and the Fair Scheduler. The main work of this paper is researching and analyzing the Dominant Resource Fairness (DRF) algorithm of YARN and putting forward a maximum-resource-utilization algorithm based on it; the paper also suggests improvements to unreasonable aspects of the resource preemption model. Emphasizing fairness during resource allocation is the core concept of YARN's DRF algorithm. Because the cluster serves multiple users and offers multiple resources, each user's resource request is also multi-dimensional. The DRF algorithm divides a user's requested resources into a dominant resource and normal resources: for a given user, the dominant resource is the one whose share of cluster capacity is highest among all requested resources, and the others are normal resources. The DRF algorithm requires the dominant resource shares of all users to be equal. But in cases where different users' dominant resource amounts differ greatly, emphasizing fairness is not suitable and cannot improve the resource utilization of the cluster. By analyzing these cases, this thesis puts forward a new allocation algorithm based on DRF. The new algorithm still takes fairness into consideration, but it is no longer the main principle; maximizing resource utilization is the main principle and goal. Comparing the results of DRF and of the new DRF-based algorithm shows that the new algorithm achieves higher resource utilization than DRF. The last part of the thesis installs a YARN environment and uses the Scheduler Load Simulator (SLS) to simulate the cluster environment.
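
The thesis's modified algorithm is its own contribution and is not reproduced here; as a minimal sketch of the baseline DRF policy it builds on, the following allocates whole tasks to the user with the lowest dominant share, using the classic two-user, two-resource example often used to explain DRF. The cluster capacity and per-task demands are that textbook example's numbers, not data from the thesis.

```python
# Minimal sketch of Dominant Resource Fairness (DRF): repeatedly give the next
# task to the user with the smallest dominant share.
capacity = {"cpu": 9.0, "mem": 18.0}                 # total cluster resources
demands = {"A": {"cpu": 1.0, "mem": 4.0},            # per-task demand of user A
           "B": {"cpu": 3.0, "mem": 1.0}}            # per-task demand of user B

used = {r: 0.0 for r in capacity}
alloc = {u: {r: 0.0 for r in capacity} for u in demands}

def dominant_share(user):
    # a user's dominant share is the largest of their per-resource shares
    return max(alloc[user][r] / capacity[r] for r in capacity)

while True:
    # users whose next task still fits in the remaining cluster capacity
    eligible = [u for u in demands
                if all(used[r] + demands[u][r] <= capacity[r] for r in capacity)]
    if not eligible:
        break
    user = min(eligible, key=dominant_share)          # DRF fairness rule
    for r in capacity:
        used[r] += demands[user][r]
        alloc[user][r] += demands[user][r]

print(alloc)   # A ends up with 3 tasks, B with 2, equalising dominant shares at 2/3
```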

Relevance:

30.00%

Publisher:

Abstract:

In order to optimize frontal detection in sea surface temperature fields at 4 km resolution, a combined statistical and expert-based approach is applied to test different spatial smoothings of the data prior to the detection process. Fronts are usually detected at 1 km resolution using the histogram-based, single image edge detection (SIED) algorithm developed by Cayula and Cornillon in 1992, with a standard preliminary smoothing using a median filter and a 3 × 3 pixel kernel. Here, detections are performed in three study regions (off Morocco, the Mozambique Channel, and north-western Australia) and across the Indian Ocean basin using the combination of multiple windows (CMW) method developed by Nieto, Demarcq and McClatchie in 2012, which improves on the original Cayula and Cornillon algorithm. Detections at 4 km and 1 km resolution are compared. Fronts are divided into two intensity classes ("weak" and "strong") according to their thermal gradient. A preliminary smoothing is applied prior to the detection using different convolutions: three types of filters (median, average and Gaussian) combined with four kernel sizes (3 × 3, 5 × 5, 7 × 7, and 9 × 9 pixels) and three detection window sizes (16 × 16, 24 × 24 and 32 × 32 pixels), to test the effect of these smoothing combinations on reducing the background noise of the data and therefore on improving the frontal detection. The performance of the combinations on 4 km data is evaluated using two criteria: detection efficiency and front length. We find that the optimal combination of preliminary smoothing parameters for enhancing detection efficiency while preserving front length includes a median filter, a 16 × 16 pixel window size, and a 5 × 5 pixel kernel for strong fronts or a 7 × 7 pixel kernel for weak fronts. Results show an improvement in detection performance (from the largest to the smallest window size) of 71% for strong fronts and 120% for weak fronts. Despite the small window used (16 × 16 pixels), the length of the fronts is preserved relative to that found with 1 km data. This optimal preliminary smoothing and the CMW detection algorithm on 4 km sea surface temperature data are then used to describe the spatial distribution of the monthly frequencies of occurrence of both strong and weak fronts across the Indian Ocean basin. In general, strong fronts are observed in coastal areas, whereas weak fronts, with some seasonal exceptions, are mainly located in the open ocean. This study shows that adequate noise reduction through a preliminary smoothing of the data considerably improves the frontal detection efficiency as well as the overall quality of the results. Consequently, the use of 4 km data enables frontal detections similar to those obtained with 1 km data (using a standard 3 × 3 median convolution) in terms of detectability, length and location. The method, using 4 km data, is easily applicable to large regions or at the global scale, with far fewer constraints on data manipulation and processing time relative to 1 km data.
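
The SIED and CMW detection algorithms are not reimplemented here; the sketch below only illustrates the preliminary smoothing step discussed above (a median filter with a configurable kernel) followed by a simple thermal-gradient magnitude, applied to a synthetic SST field. The field values, the gradient threshold, and the use of a plain gradient in place of the histogram-based detector are illustrative assumptions; the 5 × 5 kernel matches the setting the study found suitable for strong fronts.

```python
import numpy as np
from scipy.ndimage import median_filter

# Synthetic 4 km SST field (degrees C) with a sharp front plus sensor noise.
rng = np.random.default_rng(0)
sst = np.full((200, 200), 22.0)
sst[:, 100:] = 25.0                                   # idealised thermal front
sst += rng.normal(scale=0.3, size=sst.shape)          # background noise

# Preliminary smoothing: median filter with a 5 x 5 pixel kernel.
smoothed = median_filter(sst, size=5)

# Simple gradient magnitude as a stand-in for the SIED/CMW detection step.
gy, gx = np.gradient(smoothed)
front_mask = np.hypot(gx, gy) > 0.5                   # illustrative threshold
print("pixels flagged as frontal:", int(front_mask.sum()))
```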

Relevance:

30.00%

Publisher:

Abstract:

A Bayesian optimization algorithm for the nurse scheduling problem is presented, in which a suitable scheduling rule is chosen from a set for each nurse's assignment. Unlike our previous work, which used GAs to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. eventually we will be able to identify and mix building blocks directly. The Bayesian optimization algorithm implements such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed from an initial set of promising solutions. Subsequently, each new instance of each variable is generated using the corresponding conditional probabilities, until all variables have been generated, i.e., in our case, a new rule string has been obtained. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If the stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
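
The paper's full Bayesian network over the joint distribution of solutions is not reproduced here; the sketch below is a simplified univariate estimation-of-distribution loop (independent per-position rule probabilities learned from the promising strings and resampled each generation), which follows the same learn-sample-select cycle described above but not the paper's network structure. The problem size, number of rules, and toy fitness function are invented.

```python
import numpy as np

# Simplified sketch of the estimation-of-distribution loop: learn a probability
# model from the best rule strings, then sample new rule strings from it.
rng = np.random.default_rng(0)
n_nurses, n_rules, pop_size, n_promising = 20, 4, 60, 15

def fitness(rule_string):
    # toy objective standing in for schedule quality: prefer rule 2 everywhere
    return np.sum(rule_string == 2)

population = rng.integers(0, n_rules, size=(pop_size, n_nurses))
for generation in range(30):
    # keep the most promising rule strings
    best = population[np.argsort([-fitness(s) for s in population])][:n_promising]
    # per-position probability of each rule, estimated from the promising set
    probs = np.array([[np.mean(best[:, i] == r) for r in range(n_rules)]
                      for i in range(n_nurses)])
    probs = (probs + 0.01) / (probs + 0.01).sum(axis=1, keepdims=True)  # smooth
    # sample a new set of rule strings from the learned model
    new = np.array([[rng.choice(n_rules, p=probs[i]) for i in range(n_nurses)]
                    for _ in range(pop_size)])
    # keep the promising strings and fill the rest of the population with samples
    population = np.vstack([best, new[:pop_size - n_promising]])

print("best rule string:", max(population, key=fitness))
```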