854 results for High dimensional design
Abstract:
A novel image restoration approach based on high-dimensional space geometry is proposed, which is quite different from existing traditional image restoration techniques. Based on homeomorphisms and the "Principle of Homology Continuity" (PHC), an image is mapped to a point in high-dimensional space. Beginning with the original blurred image, we generate two further blurred images; the restored image is then obtained from the regression curve derived from the three points to which the images are mapped. Experiments have demonstrated the effectiveness of this "blurred-blurred-restored" algorithm, and a comparison with the classical Wiener filter approach is presented at the end.
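To make the geometry concrete, here is a minimal Python sketch of one plausible reading of the "blurred-blurred-restored" step, assuming the three images are treated as points sampled at blur steps t = 0, 1, 2 and the regression curve is a per-pixel quadratic extrapolated to t = -1; the Gaussian blur and the step choices are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def phc_restore(blurred, sigma=1.0):
    """One 'blurred-blurred-restored' step: extrapolate back along the blur path."""
    b0 = np.asarray(blurred, dtype=np.float64)  # point at t = 0 (observed blurred image)
    b1 = gaussian_filter(b0, sigma)             # point at t = 1 (further blurred)
    b2 = gaussian_filter(b1, sigma)             # point at t = 2 (blurred once more)
    # Quadratic through (0, b0), (1, b1), (2, b2), evaluated per pixel at
    # t = -1 (Lagrange form): 3*b0 - 3*b1 + b2.
    restored = 3.0 * b0 - 3.0 * b1 + b2
    return np.clip(restored, 0.0, 255.0)        # assumes an 8-bit intensity range
```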
Abstract:
The goal of image restoration is to recover the original clear image from an observed blurred image with as little distortion as possible. A novel approach based on point location in high-dimensional space geometry is proposed, which differs fundamentally from existing traditional image restoration approaches. It is based on the high-dimensional space geometry method, which derives from the Principle of Homology-Continuity (PHC). Beginning with the original blurred image, we generate two further blurred images. Through the regression curve fitted to these three images, the first iterate of the deblurred image is obtained. This iterative "blurring-deblurring-blurring" process is repeated until the deblurred image is reached. Experiments have demonstrated the effectiveness of the proposed approach, which achieves not only standard image restoration but also blind image restoration, which represents the majority of real-world problems.
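A toy sketch of the iterative loop described above, reusing the one-step phc_restore sketch from the previous abstract; the fixed iteration count is an assumed stopping rule standing in for the paper's "until the deblurred image is reached" criterion.

```python
def phc_restore_iterative(blurred, sigma=1.0, n_iter=5):
    """Repeat the one-step extrapolation for a fixed iteration budget."""
    x = blurred
    for _ in range(n_iter):
        x = phc_restore(x, sigma=sigma)  # one step back along the blur trajectory
    return x
```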
Abstract:
In this paper, a face detection algorithm based on high-dimensional space geometry is proposed. Simulation experiments comparing the proposed algorithm with the Euclidean distance, together with theoretical analysis, show that the proposed algorithm has a clear advantage over the Euclidean distance. Furthermore, in our experiments on color images, the proposed algorithm performs even better.
Abstract:
To address problems in modern information science, we put forward a new subject named High-Dimensional Space Geometrical Informatics (HDSGI). It builds a bridge between information science and the analysis of point distributions in high-dimensional space. A large body of experimental results has confirmed the correctness and applicability of the theory of HDSGI. The proposed method for image restoration is an instance of its application to signal processing. Using an iterative "further blurring-deblurring-further blurring" algorithm, the deblurred image can be obtained.
Abstract:
A novel geometric algorithm for blind image restoration is proposed in this paper, based on High-Dimensional Space Geometrical Informatics (HDSGI) theory. In this algorithm every image is considered as a point, and the location relationship of the points in high-dimensional space, i.e., the intrinsic relationship of the images, is analyzed. A geometric "blurring-blurring-deblurring" technique is then adopted to obtain the deblurred images. Compared with existing algorithms such as the Wiener filter and super-resolution image restoration, the experimental results show that the proposed algorithm not only recovers finer image detail but also reduces computational complexity and computing time. The algorithm suggests a new direction for blind image restoration with promising application prospects.
Abstract:
By introducing the flexible 1,1'-(1,4-butanediyl)bis(imidazole) (bbi) ligand into the polyoxovanadate system, five novel polyoxoanion-templated architectures based on [As₈V₁₄O₄₂]⁴⁻ and [V₁₆O₃₈Cl]⁶⁻ building blocks were obtained: [M(bbi)₂]₂[As₈V₁₄O₄₂(H₂O)] [M = Co (1), Ni (2), and Zn (3)], [Cu(bbi)]₄[As₈V₁₄O₄₂(H₂O)] (4), and [Cu(bbi)]₆[V₁₆O₃₈Cl] (5). Compounds 1-3 are isostructural and exhibit a binodal (4,6)-connected 2D structure with Schläfli symbol (3⁴·4²)(3⁴·4⁴·5⁴·6³)₂, in which the polyoxoanion induces a closed four-membered circuit of M₄(bbi)₄. Compound 4 exhibits an interesting 3D framework constructed from tetradentate [As₈V₁₄O₄₂]⁴⁻ cluster anions and cationic ladder-like double chains; a larger M₈(bbi)₆O₂ circuit exists in 4. The 3D extended structure of 5 is composed of heptadentate [V₁₆O₃₈Cl]⁶⁻ anions and flexural cationic chains, the latter consisting of six Cu(bbi) segments arranged alternately. It presents the largest circuit of this kind observed so far, a 24-membered M₂₄(bbi)₂₄ ring made of bbi molecules and transition-metal cations. Investigation of their structural relations shows the important template role of the polyoxoanions and the synergetic interactions among the polyoxoanions, transition-metal ions, and the flexible ligand in the assembly process.
Abstract:
Three novel supramolecular assemblies constructed from polyoxometalate and crown ether building blocks, [(DB18C6)Na(H₂O)₁.₅]₂Mo₆O₁₉·CH₃CN, 1, and [{Na(DB18C6)(H₂O)₂}₃(H₂O)₂]XMo₁₂O₄₀·6DMF·CH₃CN (X = P, 2, and As, 3; DB18C6 = dibenzo-18-crown-6; DMF = N,N-dimethylformamide), have been synthesized and characterized by elemental analyses, IR, UV-vis, EPR, TG, and single-crystal X-ray diffraction. Compound 1 crystallizes in the tetragonal space group P4/mbm with a = 16.9701(6) Å, c = 14.2676(4) Å, and Z = 2. Compound 2 crystallizes in the hexagonal space group P6₃/m with a = 15.7435(17) Å, c = 30.042(7) Å, γ = 120°, and Z = 2. Compound 3 crystallizes in the hexagonal space group P6₃/m with a = 15.6882(5) Å, c = 29.9778(18) Å, γ = 120°, and Z = 2. Compound 1 exhibits an unusual three-dimensional network with one-dimensional sandglass-like channels based on extensive weak interactions between the oxygen atoms of the [Mo₆O₁₉]²⁻ polyoxoanions and the CH₂ groups of the crown ether molecules. Compounds 2 and 3 are isostructural, and both contain a novel semi-open cage-like trimeric cation [{Na(DB18C6)(H₂O)₂}₃(H₂O)₂]³⁺. In their packing arrangement, an interesting 2D "honeycomb-like" host network is formed, in which the [XMo₁₂O₄₀]³⁻ (X = As and P) polyoxoanion guests reside.
Abstract:
The task in text retrieval is to find the subset of a collection of documents relevant to a user's information request, usually expressed as a set of words. Classically, documents and queries are represented as vectors of word counts. In its simplest form, relevance is defined to be the dot product between a document and a query vector, a measure of the number of common terms. A central difficulty in text retrieval is that the presence or absence of a word is not sufficient to determine relevance to a query. Linear dimensionality reduction has been proposed as a technique for extracting underlying structure from the document collection. In some domains (such as vision) dimensionality reduction reduces computational complexity. In text retrieval it is more often used to improve retrieval performance. We propose an alternative and novel technique that produces sparse representations constructed from sets of highly related words. Documents and queries are represented by their distance to these sets, and relevance is measured by the number of common clusters. This technique significantly improves retrieval performance, is efficient to compute, and shares properties with the optimal linear projection operator and the independent components of documents.
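As a rough illustration of the proposed representation, the sketch below maps a bag of words to the set of word clusters it activates and scores relevance by counting shared clusters; the word-to-cluster map and the activation threshold (min_hits) are hypothetical, since the abstract does not specify how the word sets are built.

```python
from collections import Counter

def doc_to_clusters(tokens, word2cluster, min_hits=2):
    """Map a bag of words to the set of word clusters it activates."""
    hits = Counter(word2cluster[w] for w in tokens if w in word2cluster)
    return {c for c, n in hits.items() if n >= min_hits}

def relevance(doc_clusters, query_clusters):
    """Relevance = number of word clusters shared by document and query."""
    return len(doc_clusters & query_clusters)
```

Ranking then amounts to sorting documents by relevance against the query's cluster set, touching only the sparse cluster representations rather than full term vectors.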
Abstract:
Chow and Liu introduced an algorithm for fitting a multivariate distribution with a tree, i.e., a density model that assumes only pairwise dependencies between variables, whose dependency graph is a spanning tree. The original algorithm is quadratic in the dimension of the domain and linear in the number of data points that define the target distribution P. This paper shows that for sparse, discrete data, fitting a tree distribution can be done in time and memory that are jointly subquadratic in the number of variables and the size of the data set. The new algorithm, called the acCL algorithm, takes advantage of the sparsity of the data to accelerate the computation of pairwise marginals and the sorting of the resulting mutual informations, achieving speedups of up to 2-3 orders of magnitude in the experiments.
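For reference, the classic (quadratic) Chow-Liu procedure that the acCL algorithm accelerates can be sketched as follows; the sparse-data accelerations themselves are the paper's contribution and are not reproduced here.

```python
import networkx as nx
from sklearn.metrics import mutual_info_score

def chow_liu_tree(data):
    """data: (n_samples, n_vars) array of discrete values; returns the tree."""
    n_vars = data.shape[1]
    g = nx.Graph()
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            # Empirical mutual information between variables i and j.
            g.add_edge(i, j, weight=mutual_info_score(data[:, i], data[:, j]))
    # The maximum-weight spanning tree over pairwise MI is the Chow-Liu tree.
    return nx.maximum_spanning_tree(g)
```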
Abstract:
The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. These two tasks have a complementary relationship as the temporal constraints provide valuable neighborhood information for dimensionality reduction and conversely, the low-dimensional space allows dynamics to be learnt efficiently. Solving these two tasks simultaneously allows important information to be exchanged mutually. If nonlinear models are required to capture the rich complexity of time series, then the learning problem becomes harder as the nonlinearities in both tasks are coupled. The proposed solution approximates the nonlinear manifold and dynamics using piecewise linear models. The interactions among the linear models are captured in a graphical model. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. Evaluation of the proposed framework with competing approaches is conducted in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.
Abstract:
The goal of this work is to learn a parsimonious and informative representation for high-dimensional time series. Conceptually, this comprises two distinct yet tightly coupled tasks: learning a low-dimensional manifold and modeling the dynamical process. These two tasks have a complementary relationship as the temporal constraints provide valuable neighborhood information for dimensionality reduction and conversely, the low-dimensional space allows dynamics to be learnt efficiently. Solving these two tasks simultaneously allows important information to be exchanged mutually. If nonlinear models are required to capture the rich complexity of time series, then the learning problem becomes harder as the nonlinearities in both tasks are coupled. The proposed solution approximates the nonlinear manifold and dynamics using piecewise linear models. The interactions among the linear models are captured in a graphical model. The model structure setup and parameter learning are done using a variational Bayesian approach, which enables automatic Bayesian model structure selection, hence solving the problem of over-fitting. By exploiting the model structure, efficient inference and learning algorithms are obtained without oversimplifying the model of the underlying dynamical process. Evaluation of the proposed framework with competing approaches is conducted in three sets of experiments: dimensionality reduction and reconstruction using synthetic time series, video synthesis using a dynamic texture database, and human motion synthesis, classification and tracking on a benchmark data set. In all experiments, the proposed approach provides superior performance.
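As a loose illustration of the piecewise linear modeling idea, the sketch below fits a separate linear dynamics model x_{t+1} ≈ A x_t + b to fixed-length segments of a low-dimensional trajectory by least squares; the fixed segmentation is a simplification, since the paper couples the linear models in a graphical model and selects the structure with a variational Bayesian approach.

```python
import numpy as np

def fit_piecewise_linear_dynamics(x, seg_len=50):
    """x: (T, d) trajectory; returns a list of (A, b) pairs, one per segment."""
    models = []
    for s in range(0, len(x) - 1, seg_len):
        e = min(s + seg_len, len(x) - 1)       # last usable state index in segment
        xt, xn = x[s:e], x[s + 1:e + 1]        # states and their successors
        if len(xt) < x.shape[1] + 1:
            continue                            # too few samples to fit A and b
        X = np.hstack([xt, np.ones((len(xt), 1))])  # augment for the offset b
        W, *_ = np.linalg.lstsq(X, xn, rcond=None)  # least-squares fit
        models.append((W[:-1].T, W[-1]))            # A = W[:-1].T, b = W[-1]
    return models
```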
Abstract:
We consider the problem of variable selection in regression modeling in high-dimensional spaces where there is known structure among the covariates. This is an unconventional variable selection problem for two reasons: (1) the dimension of the covariate space is comparable to, and often much larger than, the number of subjects in the study, and (2) the covariate space is highly structured, and in some cases it is desirable to incorporate this structural information into the model-building process. We approach this problem through the Bayesian variable selection framework, where we assume that the covariates lie on an undirected graph and formulate an Ising prior on the model space for incorporating structural information. Certain computational and statistical problems arise that are unique to such high-dimensional, structured settings, the most interesting being the phenomenon of phase transitions. We propose theoretical and computational schemes to mitigate these problems. We illustrate our methods on two different graph structures: the linear chain and the regular graph of degree k. Finally, we use our methods to study a specific application in genomics: the modeling of transcription factor binding sites in DNA sequences. © 2010 American Statistical Association.
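For concreteness, a standard form of such an Ising prior on the inclusion indicators (notation assumed here, since the abstract does not spell it out) is

```latex
p(\gamma) \;\propto\; \exp\Big( a \sum_{i \in V} \gamma_i \;+\; b \sum_{(i,j) \in E} \gamma_i \gamma_j \Big),
\qquad \gamma \in \{0,1\}^p,
```

where γ_i = 1 indicates that covariate i enters the model, G = (V, E) is the covariate graph, a controls overall sparsity, and b > 0 encourages covariates that are neighbors in the graph to be selected together.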
Abstract:
MOTIVATION: Technological advances that allow routine identification of high-dimensional risk factors have led to high demand for statistical techniques that enable full utilization of these rich sources of information in genetics studies. Variable selection for censored outcome data, as well as control of false discoveries (i.e., inclusion of irrelevant variables) in the presence of high-dimensional predictors, presents serious challenges. This article develops a computationally feasible method based on boosting and stability selection. Specifically, we modified component-wise gradient boosting to improve computational feasibility and introduced random permutation into stability selection for controlling false discoveries. RESULTS: We have proposed a high-dimensional variable selection method that incorporates stability selection to control false discoveries. Comparisons between the proposed method and the commonly used univariate and Lasso approaches for variable selection reveal that the proposed method yields fewer false discoveries. The proposed method is applied to study the associations of 2339 common single-nucleotide polymorphisms (SNPs) with overall survival among cutaneous melanoma (CM) patients. The results confirm that BRCA2 pathway SNPs are likely to be associated with overall survival, as reported in previous literature. Moreover, we have identified several new Fanconi anemia (FA) pathway SNPs that are likely to modulate the survival of CM patients. AVAILABILITY AND IMPLEMENTATION: The related source code and documents are freely available at https://sites.google.com/site/bestumich/issues. CONTACT: yili@umich.edu.
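A minimal sketch of stability selection in the spirit described above, with the Lasso standing in for the paper's modified component-wise gradient boosting (and plain subsampling standing in for its random-permutation variant); the half-sample size and the 0.6 frequency threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, alpha=0.1, n_subsamples=100, threshold=0.6, seed=0):
    """Return indices of variables selected in >= threshold of subsample fits."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    freq = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=n // 2, replace=False)  # random half-sample
        coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
        freq += coef != 0                                 # count selections
    freq /= n_subsamples
    return np.flatnonzero(freq >= threshold)              # stable variables
```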