891 results for High-Dimensional Space Geometrical Informatics (HDSGI)


Relevance: 100.00%

Abstract:

Mathematical morphology addresses the problem of describing shapes in an n-dimensional space using the concepts of set theory. A series of standardized morphological operations is defined, and these are applied to the shapes to transform them using another shape called the structuring element. In an industrial environment, the process of manufacturing a piece is based on the manipulation of a primitive object via contact with a tool that transforms the object progressively until the desired design is obtained. The analogy with the morphological operation of erosion is obvious. Nevertheless, few references about the relation between morphological operations and the process of design and manufacturing can be found. The non-deterministic nature of classic mathematical morphology makes it very difficult to adapt its basic operations to the dynamics of concepts such as the ordered trajectory. A new geometric model is presented, inspired by the classic morphological paradigm, which can define objects and apply morphological operations that transform these objects. The model specializes the classic morphological operations, providing them with the determinism inherent in dynamic processes that require an order of application, as is the case when designing and manufacturing objects in professional computer-aided design and manufacturing (CAD/CAM) environments. The operators are boundary-based, so that only the points on the boundary are handled. As a consequence, the process is more efficient and more suitable for use in CAD/CAM systems.
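The erosion at the heart of the analogy above can be sketched in a few lines. This is only the classic set-theoretic operation the abstract builds on, not the boundary-based deterministic model it proposes; the shapes and names are illustrative.

```python
# Minimal sketch of binary erosion on a 2D grid with a structuring element.

def erode(shape, structuring_element):
    """Erosion: keep a point p only if the structuring element,
    translated to p, fits entirely inside the shape."""
    return {
        p for p in shape
        if all((p[0] + dx, p[1] + dy) in shape for dx, dy in structuring_element)
    }

# A 4x4 solid square and a 3x3 structuring element centred at the origin,
# playing the role of the "tool" in the manufacturing analogy.
square = {(x, y) for x in range(4) for y in range(4)}
se = {(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}

eroded = erode(square, se)
print(sorted(eroded))  # [(1, 1), (1, 2), (2, 1), (2, 2)] - only the interior survives
```

Repeated erosions with different structuring elements mimic the progressive removal of material by a tool, which is exactly the analogy the abstract draws.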

Relevance: 100.00%

Abstract:

The FANOVA (or “Sobol’-Hoeffding”) decomposition of multivariate functions has been used for high-dimensional model representation and global sensitivity analysis. When the objective function f has no simple analytic form and is costly to evaluate, computing FANOVA terms may be unaffordable due to numerical integration costs. Several approximate approaches relying on Gaussian random field (GRF) models have been proposed to alleviate these costs, where f is substituted by a (kriging) predictor or by conditional simulations. Here we focus on FANOVA decompositions of GRF sample paths, and we notably introduce an associated kernel decomposition into 4^d terms, called KANOVA. An interpretation in terms of tensor product projections is obtained, and it is shown that projected kernels control both the sparsity of GRF sample paths and the dependence structure between FANOVA effects. Applications on simulated data show the relevance of the approach for designing new classes of covariance kernels dedicated to high-dimensional kriging.
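For readers unfamiliar with the FANOVA decomposition itself, it can be computed numerically for a cheap toy function. The function, the uniform measure on [0,1]^2, and the midpoint-rule grid below are illustrative choices, unrelated to the GRF machinery of the paper.

```python
# Numerical sketch of the FANOVA (Sobol'-Hoeffding) decomposition of a
# two-variable toy function under the uniform measure on [0,1]^2.

N = 200                        # quadrature points per axis (midpoint rule)
grid = [(i + 0.5) / N for i in range(N)]

def f(x1, x2):
    return x1 + x2 * x2 + x1 * x2   # toy function with an interaction term

# Constant term: the full average of f.
f0 = sum(f(a, b) for a in grid for b in grid) / N**2

# Main effects: average out the other variable, then centre by f0.
def f1(x1):
    return sum(f(x1, b) for b in grid) / N - f0

def f2(x2):
    return sum(f(a, x2) for a in grid) / N - f0

# Interaction: what remains after removing constant and main effects.
def f12(x1, x2):
    return f(x1, x2) - f0 - f1(x1) - f2(x2)

# Defining property: every non-constant term integrates to zero in each
# of its own variables, which makes the decomposition unique.
print(round(f0, 3))                                   # ~ 13/12
print(round(sum(f1(a) for a in grid) / N, 6))         # ~ 0
print(round(sum(f12(a, 0.3) for a in grid) / N, 6))   # ~ 0
```

The cost issue the abstract mentions is visible even here: the constant term alone needs N^2 evaluations of f, which is exactly what becomes unaffordable when f is expensive.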

Relevance: 100.00%

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance: 100.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance: 100.00%

Abstract:

A manufacturing technique for the production of aluminum components is described. A resin-bonded part is formed by a rapid prototyping technique and then debound and infiltrated by a second aluminum alloy under a nitrogen atmosphere. During thermal processing, the aluminum reacts with the nitrogen and is partially transformed into a rigid aluminum nitride skeleton, which provides the structural rigidity during infiltration. The simplicity and rapidity of this process in comparison to conventional production routes, combined with the ability to fabricate complicated parts of almost any geometry and with high dimensional precision, provide an additional means to manufacture aluminum components.

Relevance: 100.00%

Abstract:

With the rapid increase in both centralized video archives and distributed WWW video resources, content-based video retrieval is gaining importance. To support such applications efficiently, content-based video indexing must be addressed. Typically, each video is represented by a sequence of frames. Due to the high dimensionality of the frame representation and the large number of frames, video indexing introduces an additional degree of complexity. In this paper, we address the problem of content-based video indexing and propose an efficient solution, called the Ordered VA-File (OVA-File), based on the VA-file. OVA-File is a hierarchical structure and has two novel features: 1) it partitions the whole file into slices such that only a small number of slices are accessed and checked during k-nearest-neighbor (kNN) search, and 2) it handles insertions of new vectors into the OVA-File efficiently, such that the average distance between the new vectors and the approximations near that position is minimized. To facilitate a search, we present an efficient approximate kNN algorithm named Ordered VA-LOW (OVA-LOW) based on the proposed OVA-File. OVA-LOW first chooses candidate OVA-Slices by ranking the distances between their corresponding centers and the query vector, and then visits all approximations in the selected OVA-Slices to compute the approximate kNN. The number of candidate OVA-Slices is controlled by a user-defined parameter delta. By adjusting delta, OVA-LOW provides a trade-off between query cost and result quality. Query by video clip, consisting of multiple frames, is also discussed. Extensive experimental studies using real video data sets were conducted, and the results showed that our methods yield a significant speed-up over an existing VA-file-based method and iDistance, with high query result quality. Furthermore, by incorporating the temporal correlation of video content, our methods achieved much more efficient performance.
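The slice-then-scan idea behind OVA-LOW can be sketched roughly as follows. The slicing here is a naive nearest-centre partition, a stand-in for the actual OVA-File construction, and all sizes and names are illustrative.

```python
# Rough sketch of slice-based approximate kNN: rank slices by the distance
# from their centres to the query, scan only the top `delta` slices, and
# return the best k candidates found there.
import math
import random

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def build_slices(vectors, centers):
    """Assign each vector to the slice of its nearest centre."""
    slices = {i: [] for i in range(len(centers))}
    for v in vectors:
        i = min(range(len(centers)), key=lambda j: dist(v, centers[j]))
        slices[i].append(v)
    return slices

def approx_knn(query, slices, centers, k, delta):
    """Scan only the `delta` slices whose centres are closest to the query."""
    order = sorted(range(len(centers)), key=lambda j: dist(query, centers[j]))
    candidates = [v for j in order[:delta] for v in slices[j]]
    return sorted(candidates, key=lambda v: dist(query, v))[:k]

random.seed(0)
data = [[random.random() for _ in range(16)] for _ in range(500)]
centers = data[:8]                 # crude centre choice, for the sketch only
slices = build_slices(data, centers)

q = [0.5] * 16
res = approx_knn(q, slices, centers, k=3, delta=2)
print(len(res), round(dist(q, res[0]), 3))   # 3 neighbours, closest first
```

Raising delta scans more slices, which is exactly the query-cost versus result-quality trade-off the abstract describes.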

Relevance: 100.00%

Abstract:

A combination of uni- and multiplex PCR assays targeting 58 virulence genes (VGs) associated with Escherichia coli strains causing intestinal and extraintestinal disease in humans and other mammals was used to analyze the VG repertoire of 23 commensal E. coli isolates from healthy pigs and 52 clinical isolates associated with porcine neonatal diarrhea (ND) and postweaning diarrhea (PWD). The relationship between the presence and absence of VGs was interrogated using three statistical methods. According to the generalized linear model, 17 of 58 VGs were found to be significant (P < 0.05) in distinguishing between commensal and clinical isolates. Nine of the 17 genes, represented by iha, hlyA, aidA, east1, aah, fimH, iroN (E. coli), traT, and saa, have not been previously identified as important VGs in clinical porcine isolates in Australia. The remaining eight VGs code for fimbriae (F4, F5, F18, and F41) and toxins (STa, STh, LT, and Stx2), normally associated with porcine enterotoxigenic E. coli. Agglomerative hierarchical algorithm analysis grouped E. coli strains into subclusters based primarily on their serogroup. Multivariate analyses of clonal relationships based on the 17 VGs were collapsed into two-dimensional space by principal coordinate analysis. PWD clones were distributed in two quadrants, separated from ND and commensal clones, which tended to cluster within one quadrant. Clonal subclusters within quadrants were highly correlated with serogroups. These methods of analysis provide different perspectives in our attempts to understand how commensal and clinical porcine enterotoxigenic E. coli strains have evolved and are engaged in the dynamic process of losing or acquiring VGs within the pig population.
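The kind of presence/absence association being screened for above can be illustrated with a 2x2 chi-square test on one hypothetical virulence gene. The counts below are made up for illustration; the paper itself used a generalized linear model, not this simple test.

```python
# Toy chi-square test of association between one gene's presence/absence
# and isolate group (clinical vs. commensal).

def chi_square_2x2(a, b, c, d):
    """a, b = gene present/absent in clinical isolates;
    c, d = gene present/absent in commensal isolates."""
    n = a + b + c + d
    # Expected counts under independence of gene status and isolate group.
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts: gene present in 40/52 clinical and 5/23 commensal isolates.
stat = chi_square_2x2(40, 12, 5, 18)
print(round(stat, 2))  # well above the 3.84 cutoff for P < 0.05 at 1 df
```

A gene with such an imbalance would be flagged as discriminating between the groups; repeating this over all 58 VGs (with appropriate modelling and multiple-testing care) is the spirit of the screen the abstract reports.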

Relevance: 100.00%

Abstract:

With rapid advances in video processing technologies and ever faster increases in network bandwidth, the popularity of video content publishing and sharing has made similarity search an indispensable operation to retrieve videos of user interest. The video similarity is usually measured by the percentage of similar frames shared by two video sequences, and each frame is typically represented as a high-dimensional feature vector. Unfortunately, the high complexity of video content has posed the following major challenges for fast retrieval: (a) effective and compact video representations, (b) efficient similarity measurements, and (c) efficient indexing on the compact representations. In this paper, we propose a number of methods to achieve fast similarity search for very large video databases. First, each video sequence is summarized into a small number of clusters, each of which contains similar frames and is represented by a novel compact model called the Video Triplet (ViTri). ViTri models a cluster as a tightly bounded hypersphere described by its position, radius, and density. The ViTri similarity is measured by the volume of intersection between two hyperspheres multiplied by the minimal density, i.e., the estimated number of similar frames shared by two clusters. The total number of similar frames is then estimated to derive the overall similarity between two video sequences. Hence the time complexity of the video similarity measure can be reduced greatly. To further reduce the number of similarity computations on ViTris, we introduce a new one-dimensional transformation technique which rotates and shifts the original axis system using PCA in such a way that the original inter-distance between two high-dimensional vectors can be maximally retained after mapping. An efficient B+-tree is then built on the transformed one-dimensional values of the ViTris' positions.
Such a transformation enables B+-tree to achieve its optimal performance by quickly filtering a large portion of non-similar ViTris. Our extensive experiments on real large video datasets prove the effectiveness of our proposals that outperform existing methods significantly.
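The one-dimensional filtering idea can be sketched in miniature. Here the first coordinate stands in for the PCA-rotated axis, and a sorted list with binary search stands in for the B+-tree; both substitutions, and all data values, are illustrative.

```python
# Sketch of 1D filtering: keep sorted 1D keys for each position, and use a
# range probe to discard far-away candidates before any full-distance check.
import bisect
import math

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

positions = [(0.1, 0.9), (0.5, 0.5), (0.52, 0.48), (0.9, 0.1), (2.0, 2.0)]
keys = sorted((p[0], p) for p in positions)   # sorted 1D keys, as in a B+-tree
projected = [k for k, _ in keys]

def candidates(query, radius):
    """Range probe: only positions whose 1D key falls within `radius` of the
    query's key can possibly lie within `radius` in the full space."""
    lo = bisect.bisect_left(projected, query[0] - radius)
    hi = bisect.bisect_right(projected, query[0] + radius)
    return [p for _, p in keys[lo:hi]]

q = (0.5, 0.5)
cands = candidates(q, 0.2)                       # cheap 1D filter
result = [p for p in cands if dist(p, q) <= 0.2]  # exact check on survivors
print(result)  # [(0.5, 0.5), (0.52, 0.48)]
```

The filter is safe because a projection onto one axis can only shrink distances, so no true neighbour is ever discarded; the PCA rotation in the paper makes the same filter as tight as possible.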

Relevance: 100.00%

Abstract:

The generative topographic mapping (GTM) model was introduced by Bishop et al. (1998, Neural Comput. 10(1), 215-234) as a probabilistic reformulation of the self-organizing map (SOM). It offers a number of advantages compared with the standard SOM, and has already been used in a variety of applications. In this paper we report on several extensions of the GTM, including an incremental version of the EM algorithm for estimating the model parameters, the use of local subspace models, extensions to mixed discrete and continuous data, semi-linear models which permit the use of high-dimensional manifolds whilst avoiding computational intractability, Bayesian inference applied to hyper-parameters, and an alternative framework for the GTM based on Gaussian processes. All of these developments directly exploit the probabilistic structure of the GTM, thereby allowing the underlying modelling assumptions to be made explicit. They also highlight the advantages of adopting a consistent probabilistic framework for the formulation of pattern recognition algorithms.

Relevance: 100.00%

Abstract:

This thesis describes the Generative Topographic Mapping (GTM) --- a non-linear latent variable model, intended for modelling continuous, intrinsically low-dimensional probability distributions, embedded in high-dimensional spaces. It can be seen as a non-linear form of principal component analysis or factor analysis. It also provides a principled alternative to the self-organizing map --- a widely established neural network model for unsupervised learning --- resolving many of its associated theoretical problems. An important potential application of the GTM is visualization of high-dimensional data. Since the GTM is non-linear, the relationship between data and its visual representation may be far from trivial, but a better understanding of this relationship can be gained by computing the so-called magnification factor. In essence, the magnification factor relates the distances between data points, as they appear when visualized, to the actual distances between those data points. There are two principal limitations of the basic GTM model. The computational effort required will grow exponentially with the intrinsic dimensionality of the density model. However, if the intended application is visualization, this will typically not be a problem. The other limitation is the inherent structure of the GTM, which makes it most suitable for modelling moderately curved probability distributions of approximately rectangular shape. When the target distribution is very different from that, the aim of maintaining an 'interpretable' structure, suitable for visualizing data, may come into conflict with the aim of providing a good density model. The fact that the GTM is a probabilistic model means that results from probability theory and statistics can be used to address problems such as model complexity. Furthermore, this framework provides solid ground for extending the GTM to wider contexts than that of this thesis.
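The generative structure of the GTM can be sketched in bare-bones form: a grid of latent points is mapped through fixed nonlinear basis functions and a weight matrix into data space, where each image is the centre of an isotropic Gaussian. All sizes, the weight values, and the noise parameter below are illustrative; a real GTM fits W and beta by EM.

```python
# Bare-bones sketch of the GTM forward map and its E-step responsibilities.
import math

latent = [-1.0, -0.5, 0.0, 0.5, 1.0]          # 1D latent grid, K = 5 nodes
basis_centres = [-1.0, 0.0, 1.0]              # RBF basis functions

def phi(x):
    return [math.exp(-((x - c) ** 2) / 0.5) for c in basis_centres]

W = [[1.0, 0.2], [0.0, 1.0], [-1.0, 0.2]]     # maps 3 basis outputs to 2D data space

def project(x):
    """Image of latent point x in data space: y = phi(x) W."""
    p = phi(x)
    return [sum(p[j] * W[j][d] for j in range(3)) for d in range(2)]

def responsibilities(t, beta=10.0):
    """Posterior over latent nodes for one data point t (the E-step)."""
    lik = []
    for x in latent:
        y = project(x)
        sq = sum((y[d] - t[d]) ** 2 for d in range(2))
        lik.append(math.exp(-beta * sq / 2))
    s = sum(lik)
    return [l / s for l in lik]

r = responsibilities(project(0.5))   # a point generated exactly at x = 0.5
print([round(v, 3) for v in r])      # mass concentrates on the x = 0.5 node
```

Visualization then amounts to placing each data point at the responsibility-weighted mean of the latent grid, which is where the magnification-factor analysis discussed above becomes relevant.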

Relevance: 100.00%

Abstract:

Using techniques from Statistical Physics, the annealed VC entropy for hyperplanes in high dimensional spaces is calculated as a function of the margin for a spherical Gaussian distribution of inputs.

Relevance: 100.00%

Abstract:

Modern digital communication systems are made reliable by employing error-correction techniques that add redundancy. Low-density parity-check (LDPC) codes work along the principles of the Hamming code: the parity-check matrix is very sparse, and multiple errors can be corrected. The sparseness of the matrix allows the decoding process to be carried out by probability propagation methods similar to those employed in turbo codes. The relation between spin systems in statistical physics and digital error-correcting codes is based on the existence of a simple isomorphism between the additive Boolean group and the multiplicative binary group. Shannon proved general results on the natural limits of compression and error-correction by setting up the framework known as information theory. Error-correction codes are based on mapping the original space of words onto a higher-dimensional space in such a way that the typical distance between encoded words increases.
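The "map to a higher-dimensional space" principle in the last sentence can be shown with the simplest possible code, a 3-fold repetition code: each bit becomes a 3-bit word, the minimum distance between codewords rises from 1 to 3, and majority vote corrects any single bit flip. Real LDPC codes achieve far better rates, but the geometry is the same.

```python
# (3,1) repetition code: encode each bit three times, decode by majority vote.

def encode(bits):
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    out = []
    for i in range(0, len(received), 3):
        triple = received[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)   # majority vote
    return out

msg = [1, 0, 1, 1]
code = encode(msg)         # 4 bits mapped into a 12-dimensional space
code[4] ^= 1               # flip one transmitted bit (a channel error)
print(decode(code) == msg)  # True - the single error is corrected
```

Parity-check codes such as LDPC replace the crude repetition with sparse linear constraints, retaining the increased codeword distance while wasting far less bandwidth.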

Relevance: 100.00%

Abstract:

We consider the problem of illusory or artefactual structure in the visualisation of high-dimensional structureless data. In particular we examine the role of the distance metric in the use of topographic mappings based on the statistical field of multidimensional scaling. We show that the use of a squared Euclidean metric (i.e. the SSTRESS measure) gives rise to an annular structure when the input data is drawn from a high-dimensional isotropic distribution, and we provide a theoretical justification for this observation.
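A quick numerical experiment shows the distance concentration that underlies such artefacts: the norms of isotropic Gaussian samples concentrate tightly around sqrt(d), so the pairwise distances a mapping must reproduce carry almost no spread. The dimension and sample size below are arbitrary illustrative choices, not values from the paper.

```python
# Norm concentration for isotropic Gaussian samples in high dimension.
import math
import random

random.seed(1)
d, n = 500, 200
samples = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
norms = [math.sqrt(sum(x * x for x in v)) for v in samples]

mean = sum(norms) / n
spread = (max(norms) - min(norms)) / mean
print(round(mean, 1))      # close to sqrt(500), about 22.4
print(round(spread, 2))    # relative spread is only a few percent
```

With every point effectively the same distance from the origin (and from each other), a mapping criterion that over-weights large distances, as the squared-metric SSTRESS does, can push the embedded points onto a ring even though the data has no such structure.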

Relevance: 100.00%

Abstract:

PURPOSE. It is well documented that myopia is associated with an increase in axial length or, more specifically, in vitreous chamber depth. Whether the transverse dimensions of the eye also increase in myopia is relevant to further understanding of its development. METHODS. The posterior retinal surface was localized in two-dimensional space in both eyes of young adult white and Taiwanese-Chinese iso- and anisomyopes (N = 56), from measured keratometry, A-scan ultrasonography, and central and peripheral refraction (±35°) data, with the aid of a computer modeling program designed for this purpose. Anisomyopes had 2 D or more interocular difference in their refractive errors, with mean values in their more myopic eyes of -5.57 D and in their less myopic eyes of -3.25 D, similar to the means of the two isomyopic groups. The derived retinal contours for the more and less myopic eyes were compared by way of investigating ocular shape changes that accompany myopia, in the posterior region of the vitreous chamber. The presence and size of optic disc crescents were also investigated as an index of retinal stretching in myopia. RESULTS. Relative to the less myopic eyes of anisometropic subjects, the more myopic eyes were more elongated and also distorted into a more prolate shape in both the white and Chinese groups. However, the Chinese eyes showed a greater and more uniform relative expansion of the posterior retinal surface in their more myopic eyes, and this was associated with larger optic disc crescents. The changes in the eyes of whites displayed a nasal-temporal axial asymmetry, reflecting greater enlargement of the nasal retinal sector. CONCLUSIONS. Myopia is associated with increased axial length and a prolate shape. This prolate shape is consistent with the proposed idea that axial and transverse dimensions of the eye are regulated differently. 
The observations that ocular shape changes are larger but more symmetrical in Chinese eyes than in eyes of whites warrant further investigation.

Relevance: 100.00%

Abstract:

Today, the data available to tackle many scientific challenges is vast in quantity and diverse in nature. The exploration of heterogeneous information spaces requires suitable mining algorithms as well as effective visual interfaces. Most existing systems concentrate either on mining algorithms or on visualization techniques. Though visual methods developed in information visualization have been helpful, improved understanding of a complex, large, high-dimensional dataset requires an effective projection of the dataset onto a lower-dimensional (2D or 3D) manifold. This paper introduces a flexible visual data mining framework which combines advanced projection algorithms developed in the machine learning domain with visual techniques developed in the information visualization domain. The framework follows Shneiderman’s mantra to provide an effective user interface. The advantage of such an interface is that the user is directly involved in the data mining process. We integrate principled projection methods, such as the Generative Topographic Mapping (GTM) and Hierarchical GTM (HGTM), with powerful visual techniques, such as magnification factors, directional curvatures, parallel coordinates, billboarding, and user interaction facilities, to provide an integrated visual data mining framework. Results on a real-life high-dimensional dataset from the chemoinformatics domain are also reported and discussed. The projection results of GTM are analytically compared with those of other traditional projection methods, and it is shown that the HGTM algorithm provides additional value for large datasets. The computational complexity of these algorithms is discussed to demonstrate their suitability for the visual data mining framework.
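The projection step that such a framework is built around has a very simple interface, sketched below with two fixed random directions standing in for a learned mapping. Principled methods like GTM replace these random axes with a nonlinear manifold fitted to the data; everything here is an illustrative stand-in.

```python
# Simplest stand-in for a high-dimensional-to-2D projection step.
import random

random.seed(7)
d = 20
axes = [[random.gauss(0, 1) for _ in range(d)] for _ in range(2)]

def project2d(point):
    """Return the 2D image of a d-dimensional point along the two axes."""
    return tuple(sum(a * x for a, x in zip(axis, point)) for axis in axes)

data = [[random.random() for _ in range(d)] for _ in range(5)]
embedding = [project2d(p) for p in data]
print(len(embedding), len(embedding[0]))   # 5 points, each now 2D
```

Once every point has a 2D image, the visual layer (scatter plots, magnification factors, linked parallel-coordinate views) operates on the embedding while user interaction steers which projection is computed next.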