920 results for data representation


Relevance:

30.00%

Publisher:

Abstract:

Sparse representation based classification (SRC) is one of the most successful methods developed in recent years for face recognition. Optimal projection for sparse representation based classification (OPSRC) [1] provides a dimensionality reduction map that is intended to give optimum performance within the SRC framework. However, the computational complexity of this method is very high. Here, we propose a new projection technique based on the data scatter matrix that is computationally superior to the optimal projection method while achieving classification accuracy comparable to OPSRC. The performance of the proposed approach is benchmarked on several publicly available face databases.
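
As an illustration of the idea, a minimal sketch follows, assuming the projection is built from the leading eigenvectors of the total data scatter matrix; the paper's exact construction may differ.

```python
import numpy as np

def scatter_matrix_projection(X, k):
    """Illustrative projection built from the total data scatter matrix.

    X : (n_samples, n_features) training matrix
    k : target dimensionality
    Returns a (n_features, k) projection whose columns are the leading
    eigenvectors of the scatter matrix.
    """
    Xc = X - X.mean(axis=0)               # center the data
    S = Xc.T @ Xc                         # total scatter matrix (d x d)
    eigvals, eigvecs = np.linalg.eigh(S)  # eigh: S is symmetric
    order = np.argsort(eigvals)[::-1]     # sort by decreasing eigenvalue
    return eigvecs[:, order[:k]]

# Usage: project vectorized face images before running an SRC solver
# P = scatter_matrix_projection(X_train, k=100)
# Z_train = X_train @ P   # low-dimensional features fed to SRC
```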

Relevance:

30.00%

Publisher:

Abstract:

The objective in this work is to develop downscaling methodologies to obtain a long time record of inundation extent at high spatial resolution based on the existing low-spatial-resolution results of the Global Inundation Extent from Multi-Satellites (GIEMS) dataset. In semiarid regions, high-spatial-resolution a priori information can be provided by visible and infrared observations from the Moderate Resolution Imaging Spectroradiometer (MODIS). The study concentrates on the Inner Niger Delta, where MODIS-derived inundation extent has been estimated at a 500-m resolution. The space-time variability is first analyzed using a principal component analysis (PCA). This is particularly effective for understanding the inundation variability, interpolating in time, and filling in missing values. Two innovative methods are developed (linear regression and matrix inversion), both based on the PCA representation. These GIEMS downscaling techniques have been calibrated using the 500-m MODIS data. The downscaled fields show the expected space-time behaviors from MODIS. A 20-yr dataset of the inundation extent at 500 m is derived from this analysis for the Inner Niger Delta. The methods are very general and may be applied to many basins and to variables other than inundation, provided enough a priori high-spatial-resolution information is available. The derived high-spatial-resolution dataset will be used in the framework of the Surface Water and Ocean Topography (SWOT) mission to develop and test the instrument simulator as well as to select the calibration/validation sites (with high space-time inundation variability). In addition, once SWOT observations are available, the downscaling methodology will be calibrated on them in order to downscale the GIEMS datasets and to extend the SWOT benefits back in time to 1993.
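
The following is a hypothetical minimal sketch of the linear-regression variant of the PCA-based downscaling, assuming the GIEMS record and the MODIS 500-m fields are already co-registered as matrices; the names and calibration details below are illustrative, not the paper's implementation.

```python
import numpy as np

def pca_downscale(hr_calib, lr_calib, lr_full, n_modes=4):
    """hr_calib : (t_cal, n_hr_pixels) high-res MODIS fields (calibration period)
    lr_calib : (t_cal, n_lr_cells)  low-res GIEMS fields on the same dates
    lr_full  : (t_all, n_lr_cells)  full low-res record to downscale
    Returns (t_all, n_hr_pixels) reconstructed high-res fields."""
    hr_mean = hr_calib.mean(axis=0)
    U, s, Vt = np.linalg.svd(hr_calib - hr_mean, full_matrices=False)
    pcs = U[:, :n_modes] * s[:n_modes]          # temporal PCA coefficients
    eofs = Vt[:n_modes]                         # spatial patterns (EOFs)
    # Regress each PCA coefficient on the low-res fields (least squares).
    A = np.hstack([lr_calib, np.ones((lr_calib.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(A, pcs, rcond=None)
    # Predict coefficients for every date in the low-res record and rebuild.
    A_full = np.hstack([lr_full, np.ones((lr_full.shape[0], 1))])
    return A_full @ coef @ eofs + hr_mean
```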

Relevance:

30.00%

Publisher:

Abstract:

In big data image/video analytics, we encounter the problem of learning an over-complete dictionary for sparse representation from a large training dataset, which cannot be processed at once because of storage and computational constraints. To tackle the problem of dictionary learning in such scenarios, we propose an algorithm that exploits the inherent clustered structure of the training data and makes use of a divide-and-conquer approach. The fundamental idea behind the algorithm is to partition the training dataset into smaller clusters and learn local dictionaries for each cluster. Subsequently, the local dictionaries are merged to form a global dictionary. Merging is done by solving another dictionary learning problem on the atoms of the locally trained dictionaries. This algorithm is referred to as the split-and-merge algorithm. We show that the proposed algorithm is efficient in its usage of memory and computational complexity, and performs on par with the standard learning strategy, which operates on the entire dataset at once. As an application, we consider the problem of image denoising. We present a comparative analysis of our algorithm with the standard learning techniques that use the entire database at once, in terms of training and denoising performance. We observe that the split-and-merge algorithm results in a remarkable reduction of training time without significantly affecting the denoising performance.
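
A minimal sketch of the split-and-merge idea, using scikit-learn's k-means and dictionary learner as stand-ins for the clustering and local learning steps; the paper's own learner, dictionary sizes, and merging parameters may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning

def split_and_merge_dictionary(X, n_clusters=4, local_atoms=64, global_atoms=128):
    """X : (n_samples, n_features) training patches.
    Returns a (global_atoms, n_features) merged dictionary."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)   # split
    local_dicts = []
    for c in range(n_clusters):                                        # learn local dictionaries
        learner = MiniBatchDictionaryLearning(n_components=local_atoms)
        local_dicts.append(learner.fit(X[labels == c]).components_)
    atoms = np.vstack(local_dicts)                                     # pool all local atoms
    # Merge: run dictionary learning again on the pooled atoms themselves.
    merger = MiniBatchDictionaryLearning(n_components=global_atoms)
    return merger.fit(atoms).components_
```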

Relevance:

30.00%

Publisher:

Abstract:

Space time cube representation is an information visualization technique where spatiotemporal data points are mapped into a cube. Information visualization researchers have previously argued that space time cube representation is beneficial in revealing complex spatiotemporal patterns in a data set to users. The argument is based on the fact that both time and spatial information are displayed simultaneously to users, an effect difficult to achieve in other representations. However, to our knowledge the actual usefulness of space time cube representation in conveying complex spatiotemporal patterns to users has not been empirically validated. To fill this gap, we report on a between-subjects experiment comparing novice users' error rates and response times when answering a set of questions using either space time cube or a baseline 2D representation. For some simple questions, the error rates were lower when using the baseline representation. For complex questions where the participants needed an overall understanding of the spatiotemporal structure of the data set, the space time cube representation resulted in on average twice as fast response times with no difference in error rates compared to the baseline. These results provide an empirical foundation for the hypothesis that space time cube representation benefits users analyzing complex spatiotemporal patterns.
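
For readers unfamiliar with the technique, here is a toy sketch of the mapping itself (two spatial axes on the floor, time on the vertical axis), using synthetic points rather than the study's data.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)            # spatial coordinate 1
y = rng.uniform(0, 10, 100)            # spatial coordinate 2
t = np.sort(rng.uniform(0, 24, 100))   # time stamps (hours)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")  # the "cube": x, y horizontal, time vertical
ax.scatter(x, y, t, c=t, cmap="viridis")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("time (h)")
plt.show()
```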

Relevance:

30.00%

Publisher:

Abstract:

Space time cube representation is an information visualization technique where spatiotemporal data points are mapped into a cube. Fast and correct analysis of such information is important in, for instance, geospatial and social visualization applications. Information visualization researchers have previously argued that space time cube representation is beneficial in revealing complex spatiotemporal patterns in a dataset to users. The argument is based on the fact that both time and spatial information are displayed simultaneously to users, an effect difficult to achieve in other representations. However, to our knowledge, the actual usefulness of space time cube representation in conveying complex spatiotemporal patterns to users has not been empirically validated. To fill this gap, we report on a between-subjects experiment comparing novice users' error rates and response times when answering a set of questions using either space time cube or a baseline 2D representation. For some simple questions, the error rates were lower when using the baseline representation. For complex questions where the participants needed an overall understanding of the spatiotemporal structure of the dataset, the space time cube representation resulted in, on average, twice as fast response times with no difference in error rates compared to the baseline. These results provide an empirical foundation for the hypothesis that space time cube representation benefits users when analyzing complex spatiotemporal patterns.

Relevance:

30.00%

Publisher:

Abstract:

Contributed to: Fusion of Cultures: XXXVIII Annual Conference on Computer Applications and Quantitative Methods in Archaeology – CAA2010 (Granada, Spain, Apr 6-9, 2010)

Relevance:

30.00%

Publisher:

Abstract:

Hyper-spectral data allow the construction of more robust statistical models to sample the material properties than the standard tri-chromatic color representation does. However, because of the large dimensionality and complexity of the hyper-spectral data, the extraction of robust features (image descriptors) is not a trivial issue. Thus, to facilitate efficient feature extraction, decorrelation techniques are commonly applied to reduce the dimensionality of the hyper-spectral data with the aim of generating compact and highly discriminative image descriptors. Current methodologies for data decorrelation, such as principal component analysis (PCA), linear discriminant analysis (LDA), wavelet decomposition (WD), or band selection methods, require complex and subjective training procedures, and in addition the compressed spectral information is not directly related to the physical (spectral) characteristics of the analyzed materials. The major objective of this article is to introduce and evaluate a new data decorrelation methodology using an approach that closely emulates human vision. The proposed data decorrelation scheme has been employed to optimally minimize the amount of redundant information contained in the highly correlated hyper-spectral bands and has been comprehensively evaluated in the context of non-ferrous material classification.
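
For context only, a sketch of the standard PCA decorrelation that the abstract cites as a baseline, applied to a hyper-spectral cube; the proposed human-vision-inspired scheme itself is not reproduced here.

```python
import numpy as np

def pca_decorrelate(cube, n_components=8):
    """cube : (height, width, n_bands) hyper-spectral image.
    Returns (height, width, n_components) decorrelated band images."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    Xc = X - X.mean(axis=0)
    # Eigenvectors of the band-to-band covariance give the decorrelating basis.
    cov = (Xc.T @ Xc) / (Xc.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    basis = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return (Xc @ basis).reshape(h, w, n_components)
```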

Relevance:

30.00%

Publisher:

Abstract:

There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled to be approximately sparse and exploiting this fact can be very beneficial. This has led to an immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools to approach problems arising in machine learning, system identification and quantum tomography.

In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:

  • For a given number of measurements, can we reliably estimate the true signal?
  • If so, how good is the reconstruction as a function of the model parameters?

More specifically: (i) focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model; (ii) we show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously; (iii) finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model. We pay particular attention to the following aspects. For (i) and (ii), we aim to provide a general geometric framework in which the results on sparse and low-rank estimation can be obtained as special cases. For (i) and (iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
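
As a concrete instance of the convex relaxations discussed above, a minimal sketch of solving the lasso by proximal gradient descent (ISTA); this is a generic illustration, not the dissertation's analysis or bounds.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the l1 norm.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_lasso(A, y, lam, n_iter=500):
    """Minimize 0.5 * ||A x - y||_2^2 + lam * ||x||_1 by ISTA."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)               # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Example: recover a sparse vector from a few random linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 10, replace=False)] = rng.standard_normal(10)
y = A @ x_true
x_hat = ista_lasso(A, y, lam=0.1)
```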

Relevance:

30.00%

Publisher:

Abstract:

The electron diffraction investigation of the following compounds has been carried out: sulfur, sulfur nitride, realgar, arsenic trisulfide, spiropentane, dimethyltrisulfide, cis and trans lewisite, methylal, and ethylene glycol.

The crystal structures of the following salts have been determined by x-ray diffraction: silver molybdate and hydrazinium dichloride.

Suggested revisions of the covalent radii for B, Si, P, Ge, As, Sn, Sb, and Pb have been made, and values for the covalent radii of Al, Ga, In, Tl, and Bi have been proposed.

The Schomaker-Stevenson revision of the additivity rule for single covalent bond distances has been used in conjunction with the revised radii. Agreement with experiment is in general better with the revised radii than with the former radii and additivity.
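
For reference, the Schomaker-Stevenson relation is usually quoted in the form below (the abstract itself does not restate the rule; the 0.09 Å constant is the value commonly cited):

```latex
% Schomaker-Stevenson relation for the length of a single covalent A-B bond,
% with distances in angstroms:
\begin{equation}
  D(\mathrm{A\!-\!B}) = r_\mathrm{A} + r_\mathrm{B} - 0.09\,| x_\mathrm{A} - x_\mathrm{B} | ,
\end{equation}
% where r_A, r_B are the covalent radii and x_A, x_B the electronegativities.
```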

The principle of ionic bond character in addition to that present in a normal covalent bond has been applied to the observed structures of numerous molecules. It leads to a method of interpretation which is at least as consistent as the theory of multiple bond formation.

The revision of the additivity rule has been extended to double bonds. An encouraging beginning along these lines has been made, but additional experimental data are needed for clarification.

Relevance:

30.00%

Publisher:

Abstract:

Southern bluefin tuna (SBT) (Thunnus maccoyii) growth rates are estimated from tag-return data associated with two time periods, the 1960s and 1980s. The traditional von Bertalanffy growth (VBG) model and a two-phase VBG model were fitted to the data by maximum likelihood. The traditional VBG model did not provide an adequate representation of growth in SBT, and the two-phase VBG model yielded a significantly better fit. The results indicate that a significant change occurs in the pattern of growth, relative to a VBG curve, during the juvenile stages of the SBT life cycle, which may be related to the transition from a tightly schooling fish that spends substantial time in near-shore and surface waters to one that is found primarily in deeper, more offshore waters. The results suggest that more complex growth models should be considered for other tunas and for other species that show a marked change in habitat use with age. The likelihood surface for the two-phase VBG model was found to be bimodal, and some implications of this are investigated. Significant and substantial differences were found in the growth of fish spawned in the 1960s and in the 1980s, such that after age four there is a difference of about one year in the expected age of fish of similar length, a difference that persists over the size range for which meaningful recapture data are available. This difference may be a density-dependent response to the marked reduction in the SBT population. Given the key role that estimates of growth play in most stock assessments, the results indicate a need both for regular monitoring of growth rates and for provisions for changes in growth over time (possibly related to changes in abundance) in the stock assessment models used for SBT and other species.
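
For reference, the traditional von Bertalanffy growth curve has the standard form below; the two-phase variant fitted in the study, which allows the parameters to change at an estimated transition age, is not reproduced here.

```latex
% Standard von Bertalanffy growth curve:
\begin{equation}
  L(t) = L_\infty \left( 1 - e^{-k (t - t_0)} \right),
\end{equation}
% where L_infty is the asymptotic length, k the growth coefficient, and
% t_0 the theoretical age at zero length.
```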

Relevance:

30.00%

Publisher:

Abstract:

Compared with structured data sources that are usually stored and analyzed in spreadsheets, relational databases, and single data tables, unstructured construction data sources such as text documents, site images, web pages, and project schedules have been less intensively studied because of additional challenges in data preparation, representation, and analysis. In this paper, our vision for data management and mining that addresses such challenges is presented, together with related research results from previous work and our recent developments in data mining on text-based, web-based, image-based, and network-based construction databases.

Relevance:

30.00%

Publisher:

Abstract:

Compared with construction data sources that are usually stored and analyzed in spreadsheets and single data tables, data sources with more complicated structures, such as text documents, site images, web pages, and project schedules, have been less intensively studied because of additional challenges in data preparation, representation, and analysis. In this paper, our definition of and vision for advanced data analysis addressing such challenges are presented, together with related research results from previous work and our recent developments in data analysis on text-based, image-based, web-based, and network-based construction sources. It is shown that particular data preparation, representation, and analysis operations should be identified and integrated with careful problem investigations and scientific validation measures in order to provide general frameworks in support of information search and knowledge discovery from such information-abundant data sources.

Relevance:

30.00%

Publisher:

Abstract:

Subspace learning is the process of finding a proper feature subspace and then projecting high-dimensional data onto the learned low-dimensional subspace. The projection operation requires many floating-point multiplications and additions, which makes the projection process computationally expensive. To tackle this problem, this paper proposes two simple-but-effective fast subspace learning and image projection methods, fast Haar transform (FHT) based principal component analysis and FHT based spectral regression discriminant analysis. The advantages of these two methods result from employing both the FHT for subspace learning and the integral vector for feature extraction. Experimental results on three face databases demonstrated their effectiveness and efficiency.
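
A minimal sketch of a 1-D fast Haar transform (unnormalized averaging and differencing), included only to illustrate the FHT building block named above; the normalization and the 2-D image-level usage in the paper may differ.

```python
import numpy as np

def fast_haar_transform(signal):
    """signal : 1-D array whose length is a power of two.
    Returns the Haar coefficients (overall average first, then details
    from coarsest to finest)."""
    out = np.asarray(signal, dtype=float).copy()
    n = out.size
    while n > 1:
        half = n // 2
        avg = (out[0:n:2] + out[1:n:2]) / 2.0   # pairwise averages
        dif = (out[0:n:2] - out[1:n:2]) / 2.0   # pairwise differences
        out[:half], out[half:n] = avg, dif
        n = half
    return out

# e.g. fast_haar_transform([4, 6, 10, 12, 8, 6, 5, 5])
```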

Relevance:

30.00%

Publisher:

Abstract:

First, a method is presented for building a map of the robot's environment by fusing data from multiple ultrasonic sensors and a laser-based global positioning system. On this basis, a new method for recognizing obstacles in unstructured environments is proposed for the first time, namely an obstacle-group-based method. The main feature of this method is that it can extract and describe the features of the robot's environment more concisely and effectively, which is essential for achieving good navigation and obstacle avoidance and for improving the autonomy and real-time performance of the system. Extensive experimental results demonstrate the effectiveness of the method.
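
A hypothetical illustration of the obstacle-group idea: fused range readings are converted to Cartesian points in the robot frame and nearby points are grouped into obstacle clusters. The abstract does not specify the actual grouping criterion, so the distance-threshold rule below is an assumption.

```python
import numpy as np

def group_obstacles(ranges, angles, gap=0.5):
    """ranges, angles : 1-D arrays of fused sensor readings (m, rad),
    ordered by bearing. Points closer than `gap` metres to the previous
    point are put in the same obstacle group."""
    pts = np.column_stack([ranges * np.cos(angles), ranges * np.sin(angles)])
    groups, current = [], [pts[0]]
    for prev, cur in zip(pts[:-1], pts[1:]):
        if np.linalg.norm(cur - prev) <= gap:
            current.append(cur)
        else:
            groups.append(np.array(current))
            current = [cur]
    groups.append(np.array(current))
    return groups   # each group can be summarized by its centroid and extent
```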

Relevance:

30.00%

Publisher:

Abstract:

The interpretation and recognition of noisy contours, such as silhouettes, have proven to be difficult. One obstacle to the solution of these problems has been the lack of a robust representation for contours. Here, the contour is represented by a set of pairwise tangent circular arcs. The advantage of such an approach is that mathematical properties such as orientation and curvature are explicitly represented. We introduce a smoothing criterion for the contour that optimizes the tradeoff between the complexity of the contour and its proximity to the data points. The complexity measure is the number of extrema of curvature present in the contour. The smoothing criterion leads us to a true scale-space for contours. We describe the computation of the contour representation as well as the computation of relevant properties of the contour. We consider the potential application of the representation, the smoothing paradigm, and the scale-space to contour interpretation and recognition.
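
A hypothetical sketch of the complexity measure described above: counting the extrema of curvature along a sampled closed contour. The circular-arc representation and the smoothing criterion themselves are not reproduced here.

```python
import numpy as np

def _cyclic_diff(v):
    # Central differences on a closed (periodic) sequence of samples.
    return (np.roll(v, -1) - np.roll(v, 1)) / 2.0

def count_curvature_extrema(x, y):
    """x, y : 1-D arrays sampling a closed contour (no repeated endpoint).
    Returns the number of local extrema of the discrete curvature."""
    dx, dy = _cyclic_diff(x), _cyclic_diff(y)
    ddx, ddy = _cyclic_diff(dx), _cyclic_diff(dy)
    kappa = (dx * ddy - dy * ddx) / np.power(dx**2 + dy**2, 1.5)
    dk = np.roll(kappa, -1) - kappa
    # Each extremum corresponds to one sign change of the curvature differences.
    return int(np.sum(np.sign(dk) != np.sign(np.roll(dk, -1))))

# An ellipse has four curvature extrema:
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
print(count_curvature_extrema(2 * np.cos(t), np.sin(t)))   # -> 4
```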