912 results for Cluster Analysis. Information Theory. Entropy. Cross Information Potential. Complex Data


Relevance: 100.00%

Abstract:

3rd SMTDA Conference Proceedings, 11-14 June 2014, Lisbon, Portugal.

Relevance: 100.00%

Abstract:

Currently, one of the biggest challenges in data mining is performing cluster analysis on complex data. Several techniques have been proposed, but in general they achieve good results only within specific domains, and there is no consensus on the best way to group this kind of data. These techniques typically fail because of unrealistic assumptions about the true probability distribution of the data. This thesis therefore proposes a new measure based on the Cross Information Potential that uses representative points of the dataset and statistics extracted directly from the data to measure the interaction between groups. The proposed approach exploits all the advantages of this information-theoretic descriptor while overcoming the limitations imposed by its own nature. Building on it, two cost functions and three algorithms are proposed to perform cluster analysis. Because Information Theory captures the relationship between different patterns regardless of assumptions about the nature of that relationship, the proposed approach outperformed the main algorithms in the literature. These results hold both for synthetic data designed to test the algorithms in specific situations and for real data drawn from problems in different fields.
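The abstract does not spell out the estimator, but in information-theoretic learning the Cross Information Potential between two groups is commonly estimated with Parzen windows over pairs of points drawn from the two groups. The sketch below illustrates that standard estimator with a Gaussian kernel; the function name, the bandwidth `sigma`, and the use of raw sample points rather than the thesis's representative points are assumptions for illustration only.

```python
import numpy as np

def cross_information_potential(X, Y, sigma=1.0):
    """Parzen-window estimate of the Cross Information Potential between
    two groups of points X (n_x, d) and Y (n_y, d):

        CIP(X, Y) = (1 / (n_x * n_y)) * sum_i sum_j G(x_i - y_j, 2 * sigma^2)

    where G is a Gaussian kernel. Larger values indicate stronger
    interaction (overlap) between the two groups.
    """
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    d = X.shape[1]
    # Pairwise squared distances between every x_i and y_j.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    var = 2.0 * sigma ** 2
    norm = (2.0 * np.pi * var) ** (d / 2.0)
    kernels = np.exp(-sq_dists / (2.0 * var)) / norm
    return kernels.mean()

# Toy usage: well-separated blobs give a small cross potential,
# a group compared with itself gives a much larger one.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(200, 2))
b = rng.normal(5.0, 1.0, size=(200, 2))
print(cross_information_potential(a, b), cross_information_potential(a, a))
```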

Relevance: 100.00%

Abstract:

This paper develops a general theory of land inheritance rules. We distinguish between two classes of rules: those that allow a testator discretion in disposing of his land (like a best-qualified rule), and those that constrain his choice (like primogeniture). The primary benefit of the latter is to prevent rent seeking by heirs, but the cost is that a testator cannot make use of information about the relative abilities of his heirs to manage the land. We also account for the impact of scale economies in land use. We conclude by offering some empirical tests of the model using a cross-cultural sample of societies.

Relevance: 100.00%

Abstract:

BACKGROUND: In 2011, a patient was admitted to our hospital with acute schistosomiasis after having returned from Madagascar and having bathed at the Lily waterfalls. On the basis of this patient's indication, infection was suspected in 41 other subjects. This study investigated (1) the knowledge of the travelers about the risks of schistosomiasis and their related behavior to evaluate the appropriateness of prevention messages and (2) the diagnostic workup of symptomatic travelers by general practitioners to evaluate medical care of travelers with a history of freshwater exposure in tropical areas. METHODS: A questionnaire was sent to the 42 travelers with potential exposure to schistosomiasis. It focused on pre-travel knowledge of the disease, bathing conditions, clinical presentation, first suspected diagnosis, and treatment. RESULTS: Of the 42 questionnaires, 40 (95%) were returned, among which 37 travelers (92%) reported an exposure to freshwater, and 18 (45%) were aware of the risk of schistosomiasis. Among these latter subjects, 16 (89%) still reported an exposure to freshwater. Serology was positive in 28 (78%) of 36 exposed subjects at least 3 months after exposure. Of the 28 infected travelers, 23 (82%) exhibited symptoms and 16 (70%) consulted their general practitioner before the information about the outbreak had spread, but none of these patients had a serology for schistosomiasis done during the first consultation. CONCLUSIONS: The usual prevention message of avoiding freshwater contact when traveling in tropical regions had no impact on the behavior of these travelers, who still went swimming at the Lily waterfalls. This prevention message should, therefore, be either modified or abandoned. The clinical presentation of acute schistosomiasis is often misleading. General practitioners should at least request an eosinophil count, when confronted with a returning traveler with fever. If eosinophilia is detected, it should prompt the search for a parasitic disease.

Relevance: 100.00%

Abstract:

Microarray technology is a high-throughput method for genotyping and gene expression profiling. Limited sensitivity and specificity are among the essential problems of this technology. Most existing methods of microarray data analysis have an apparent limitation: they deal only with the numerical part of microarray data and make little use of gene sequence information. Because it is the gene sequences that precisely define the physical objects being measured by a microarray, it is natural to make the gene sequences an essential part of the data analysis. This dissertation focused on the development of free-energy models that integrate sequence information into microarray data analysis. The models were used to characterize the mechanism of hybridization on microarrays and to enhance the sensitivity and specificity of microarray measurements. Cross-hybridization is a major obstacle to the sensitivity and specificity of microarray measurements. In this dissertation, we evaluated the scope of the cross-hybridization problem on short-oligo microarrays. The results showed that cross-hybridization on arrays is mostly caused by oligo fragments with a run of 10 to 16 nucleotides complementary to the probes. Furthermore, a free-energy-based model was proposed to quantify the amount of cross-hybridization signal on each probe. This model treats cross-hybridization as an integral effect of the interactions between a probe and various off-target oligo fragments. Using public spike-in datasets, the model showed high accuracy in predicting the cross-hybridization signals on those probes whose intended targets are absent from the sample. Several prospective models were proposed to improve the Positional-Dependent Nearest-Neighbor (PDNN) model for better quantification of gene expression and cross-hybridization. The problem addressed in this dissertation is fundamental to microarray technology. We expect that this study will help us understand the detailed mechanism that determines sensitivity and specificity on microarrays. Consequently, this research will have a wide impact on how microarrays are designed and how the data are interpreted.
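The dissertation's free-energy model is not specified in the abstract beyond treating cross-hybridization as an integral effect of probe/off-target interactions, so the sketch below is only a hedged toy version of that idea: each candidate off-target fragment is scored by its longest complementary run with the probe, converted into a crude duplex free energy, and summed with a Boltzmann weight. The function names, the per-base-pair energy, the RT value, and the 10-nt cutoff (taken from the 10-16 nt finding above) are illustrative assumptions; a real model would use nearest-neighbor stacking parameters.

```python
import itertools
import math

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def longest_complementary_run(probe: str, fragment: str) -> int:
    """Length of the longest stretch of `fragment` that is exactly
    complementary (antiparallel) to some stretch of `probe`."""
    target = "".join(COMPLEMENT[b] for b in reversed(fragment))
    best = 0
    for i, j in itertools.product(range(len(probe)), range(len(target))):
        k = 0
        while (i + k < len(probe) and j + k < len(target)
               and probe[i + k] == target[j + k]):
            k += 1
        best = max(best, k)
    return best

def cross_hyb_signal(probe, off_targets, dg_per_bp=-1.4, rt=0.6):
    """Toy 'integral effect' model: every off-target fragment contributes a
    Boltzmann-weighted term based on a crude duplex free energy proportional
    to its longest complementary run with the probe (assumed parameters)."""
    signal = 0.0
    for fragment, concentration in off_targets:
        run = longest_complementary_run(probe, fragment)
        if run >= 10:  # runs of 10-16 nt dominate cross-hybridization (per the abstract)
            delta_g = dg_per_bp * run          # kcal/mol, stand-in for a stacking sum
            signal += concentration * math.exp(-delta_g / rt)
    return signal

# Toy usage: the first hypothetical fragment contains a 14-nt run complementary
# to the probe and contributes; the second does not.
probe = "ATCGATCGATCGATCGATCGATCGA"
frags = [("ATCGATCGATCGAT", 1.0), ("GGCCGGCCGGCC", 1.0)]
print(cross_hyb_signal(probe, frags))
```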

Relevance: 100.00%

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8-10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18-21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27].

Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28-31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34-36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data.
The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer not greater than x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.

ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
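The text above gives only the core idea behind VCA: project the data onto a direction orthogonal to the subspace spanned by the endmembers already found, and take the pixel at the extreme of the projection as the next endmember. The following is a minimal NumPy sketch of that idea under simplifying assumptions: the SVD-based dimensionality reduction stands in for the signal-subspace identification step, and the random projection directions and function name are illustrative; this is not the published VCA implementation.

```python
import numpy as np

def extract_endmembers(R, p, seed=0):
    """R: (bands, pixels) matrix of observed spectra; p: number of endmembers.
    Returns the indices of the pixels picked as endmember candidates."""
    rng = np.random.default_rng(seed)
    mu = R.mean(axis=1, keepdims=True)
    # Stand-in for the signal-subspace step: keep the first p principal
    # directions of the mean-removed data.
    U, _, _ = np.linalg.svd(R - mu, full_matrices=False)
    X = U[:, :p].T @ (R - mu)                  # data in the p-dimensional subspace
    E = np.zeros((p, p))                       # endmember estimates found so far
    indices = []
    for k in range(p):
        w = rng.standard_normal(p)             # random direction
        if k > 0:
            # Project w onto the orthogonal complement of span(E[:, :k]).
            Q, _ = np.linalg.qr(E[:, :k])
            w -= Q @ (Q.T @ w)
        idx = int(np.argmax(np.abs(w @ X)))    # pixel with the extreme projection
        indices.append(idx)
        E[:, k] = X[:, idx]
    return indices

# Toy usage: 3 random endmember spectra mixed with Dirichlet abundances.
rng = np.random.default_rng(1)
M = rng.random((50, 3))                        # 50 bands, 3 endmembers
A = rng.dirichlet(np.ones(3), size=1000).T     # abundance fractions sum to one
print(extract_endmembers(M @ A, p=3))
```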

Relevance: 100.00%

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance: 100.00%

Abstract:

Background: Event-related potentials (ERPs) may be used as a highly sensitive way of detecting subtle degrees of cognitive dysfunction. On the other hand, impairment of cognitive skills is increasingly recognised as a hallmark of patients suffering from multiple sclerosis (MS). We sought to determine the psychophysiological pattern of information processing among MS patients with the relapsing-remitting form of the disease and low physical disability, considered as two subtypes: 'typical relapsing-remitting' (RRMS) and 'benign MS' (BMS). Furthermore, we subjected our data to a cluster analysis to determine whether MS patients and healthy controls could be differentiated in terms of their psychophysiological profile. Methods: We investigated MS patients with RRMS and BMS subtypes using event-related potentials (ERPs) acquired in the context of a Posner visual-spatial cueing paradigm. Specifically, our study aimed to assess ERP brain activity during response preparation (contingent negative variation, CNV) and stimulus processing in MS patients. Latencies and amplitudes of different ERP components (P1, eN1, N1, P2, N2, P3 and late negativity, LN) as well as behavioural responses (reaction time, RT; correct responses, CRs; and number of errors) were analyzed and then subjected to cluster analysis. Results: Both MS groups showed delayed behavioural responses and increased latencies for long-latency ERP components (P2, N2, P3) as well as relatively preserved ERP amplitudes, but BMS patients showed greater performance deficits (lower CRs and higher RTs) and abnormalities in the latency (N1, P3) and amplitude of ERPs (eCNV, eN1, LN). However, RRMS patients also demonstrated abnormally high amplitudes related to the preparation period of the CNV (cCNV) and the post-processing phase (LN). Cluster analyses revealed that RRMS patients appear to make up a relatively homogeneous group with moderate deficits mainly related to ERP latencies, whereas BMS patients appear to make up a rather more heterogeneous group with more severe information-processing and attentional deficits. Conclusions: Our findings are suggestive of a slowing of information processing in MS patients that may be a consequence of demyelination and axonal degeneration, which also seems to occur in MS patients who show little or no progression in the physical severity of the disease over time.
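The abstract does not describe the clustering procedure itself; one common way to run such an analysis is to standardize each subject's ERP latency/amplitude and behavioural measures and apply hierarchical (Ward) clustering, as in the hedged sketch below. The feature values, their ordering, and the two-cluster cut are invented for illustration and are not the study's data or exact method.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

# Hypothetical per-subject feature matrix: rows are subjects, columns are
# ERP and behavioural measures (here: P3 latency ms, N1 latency ms,
# LN amplitude µV, reaction time ms, correct responses %).
features = np.array([
    [352.0, 148.0, -2.1, 455.0, 92.0],   # control-like profile
    [398.0, 165.0, -3.4, 540.0, 80.0],   # slowed, RRMS-like profile
    [420.0, 171.0, -4.8, 610.0, 68.0],   # more impaired, BMS-like profile
    [350.0, 150.0, -2.0, 460.0, 94.0],
    [405.0, 160.0, -3.6, 555.0, 78.0],
    [430.0, 175.0, -5.1, 630.0, 65.0],
])

# Standardize each feature so latencies, amplitudes and scores are comparable.
Z = zscore(features, axis=0)

# Agglomerative clustering with Ward linkage; cut the tree into two groups.
tree = linkage(Z, method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)
```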

Relevance: 100.00%

Abstract:

As we enter an era of ‘big data’, asset information is becoming a deliverable of complex projects. Prior research suggests digital technologies enable rapid, flexible forms of project organizing. This research analyses practices of managing change in Airbus, CERN and Crossrail, through desk-based review, interviews, visits and a cross-case workshop. These organizations deliver complex projects, rely on digital technologies to manage large data sets, and use configuration management, a systems engineering approach with mid-20th-century origins, to establish and maintain integrity. In them, configuration management has become more, rather than less, important. Asset information is structured, with change managed through digital systems, using relatively hierarchical, asynchronous and sequential processes. The paper contributes by uncovering limits to flexibility in complex projects where integrity is important. Challenges of managing change are discussed, considering the evolving nature of configuration management, the potential use of analytics on complex projects, and implications for research and practice.

Relevance: 100.00%

Abstract:

Maine has the highest potential for wind energy in New England and falls within the top twenty states in the nation, ranking just behind Wisconsin and California with an estimated electrical output of 56 billion kWh. The geological makeup of Maine’s mountains in the western part of the state and its exposed coastline provide opportune areas to capture wind and convert it into energy. The information included in this poster suggests the most likely areas for wind development based on a number of factors recommended by the American Wind Energy Association.

Relevance: 100.00%

Abstract:

A system of cluster analysis for genome-wide expression data from DNA microarray hybridization is described that uses standard statistical algorithms to arrange genes according to similarity in pattern of gene expression. The output is displayed graphically, conveying the clustering and the underlying expression data simultaneously in a form intuitive for biologists. We have found in the budding yeast Saccharomyces cerevisiae that clustering gene expression data efficiently groups together genes of known similar function, and we find a similar tendency in human data. Thus patterns seen in genome-wide expression experiments can be interpreted as indications of the status of cellular processes. Also, coexpression of genes of known function with poorly characterized or novel genes may provide a simple means of gaining leads to the functions of many genes for which information is not currently available.
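As a hedged illustration of the style of analysis described above (hierarchical clustering of genes by similarity of expression pattern, then display in dendrogram order), the sketch below uses Pearson-correlation distance and average linkage on a toy expression matrix; the data and parameter choices are assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import pdist

# Toy expression matrix: rows are genes, columns are conditions (log ratios).
rng = np.random.default_rng(0)
base = rng.normal(size=(3, 8))                       # three underlying patterns
genes = np.vstack([base[i] + 0.1 * rng.normal(size=8)
                   for i in (0, 0, 1, 1, 2, 2, 2)])  # 7 genes following the patterns

# Distance = 1 - Pearson correlation between expression profiles, so genes
# with similar patterns (regardless of magnitude) end up close together.
dists = pdist(genes, metric="correlation")
tree = linkage(dists, method="average")

# Order genes as they would appear along the dendrogram for display.
print(leaves_list(tree))
```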