919 results for High-dimensional


Relevance: 60.00%

Abstract:

This dissertation presents the results of research on the inerting of fly ash from municipal solid waste and its subsequent encapsulation in different mortar matrices. The inerting of this toxic and hazardous waste has been achieved, as well as its valorization as a by-product. In this way, a new "raw material" becomes available at low cost through a simple process, a toxic and hazardous waste is eliminated and, as a consequence, alternative natural resources are conserved. Chemical characterization of the ashes shows high concentrations of soluble chlorides, Zn and Pb. An inerting process for the fly ash based on sodium bicarbonate (NaHCO3) has been developed that reduces the chloride content by 99% and keeps the pH at values where the concentration of heavy metals in the leachate is minimal, owing to their stabilization in the form of insoluble carbonates. Mortars were prepared with four different types of cement (CEM-I, CEM-II, CAC and CSA), incorporating inertized fly ash at 10% by weight of the aggregate used. The mortars tested cover different dosages, both in aggregate size (0/2 and 0/4) and in cement/aggregate ratio (1/1 and 1/3). Their physical and mechanical properties were obtained through workability, dimensional stability, carbonation, porosity and mechanical strength tests. Leaching tests of Zn, Pb, Cu and Cd were also performed on monolithic specimens of the mortars with the best physical/mechanical behavior, with the leached heavy-metal ion content analyzed by anodic stripping voltammetry. It is concluded that all the mortars tested are technically acceptable, the most favorable being those made with calcium sulfoaluminate cement (CSA) and calcium aluminate cement (CAC). In the latter case, the compressive strengths of the reference mortars are improved by more than 48%, and the flexural strengths by more than 67%. The leaching tests likewise reveal complete encapsulation of the Zn ions and mitigation of Pb-ion leaching. Both mortars would be perfectly valid for applications requiring a fast-setting product with high early strength and shrinkage compensation with high dimensional stability. On this basis, the material could be used as a repair mortar for roads and pavements requiring high performance, such as industrial floors, landing strips and parking lots, or for the manufacture of precast elements without structural reinforcement, given its high flexural strength.
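As a small illustration of the dosages just described, the sketch below computes component masses for one batch; the 100 kg aggregate basis and the function itself are hypothetical, and only the ratios (cement/aggregate 1/1 or 1/3, inertized fly ash at 10% of the aggregate weight) come from the abstract.

```python
# Hypothetical batch arithmetic; only the ratios come from the abstract above.

def mortar_batch(aggregate_kg: float, cement_to_aggregate: float) -> dict:
    """Component masses for one mortar batch at a given cement/aggregate ratio."""
    return {
        "cement_kg": aggregate_kg * cement_to_aggregate,
        "aggregate_kg": aggregate_kg,
        "fly_ash_kg": aggregate_kg * 0.10,  # inertized fly ash: 10% of aggregate weight
    }

if __name__ == "__main__":
    for ratio in (1.0, 1.0 / 3.0):  # the two cement/aggregate ratios tested
        print(f"cement/aggregate {ratio:.2f}:", mortar_batch(100.0, ratio))
```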

Relevance: 60.00%

Abstract:

Traffic flow time series data are usually high-dimensional and very complex. They are also sometimes imprecise and distorted due to malfunctions of the data collection sensors. Additionally, events such as congestion caused by traffic accidents add further uncertainty to real-time traffic conditions, making traffic flow forecasting a complicated task. This article presents a new data preprocessing method targeting multidimensional time series with a very high number of dimensions and shows its application to real traffic flow time series from the California Department of Transportation (PEMS web site). The proposed method consists of three main steps. First, based on mTESL, a language for defining events in multidimensional time series, we identify a number of event types that correspond to either incorrect data or data with interference. Second, the data affected by each event type are restored using an original method that combines real observations, locally forecasted values and historical data. Third, an exponential smoothing procedure is applied globally to eliminate noise interference and other random errors, so as to provide good-quality source data for future work.
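A minimal sketch of steps 2 and 3 for a single detector's series, assuming step 1 (mTESL event detection) has already produced a boolean mask of bad samples; the restoration weights and the naive two-point local forecast are assumptions, not the method's actual choices.

```python
import numpy as np

def restore_and_smooth(series, bad_mask, historical, alpha=0.3, w=(0.4, 0.3, 0.3)):
    """Hedged reconstruction of steps 2-3 described above.

    series:     1-D array of one detector's flow values
    bad_mask:   boolean array marking samples flagged as events in step 1
    historical: same-length array of historical averages for each time slot
    """
    x = series.astype(float)
    # Step 2: restore each flagged sample from observation, forecast and history.
    for t in np.flatnonzero(bad_mask):
        obs = x[t - 1] if t > 0 else historical[t]          # last available observation
        local = 2 * x[t - 1] - x[t - 2] if t > 1 else obs   # naive linear local forecast
        x[t] = w[0] * obs + w[1] * local + w[2] * historical[t]
    # Step 3: global exponential smoothing to suppress residual noise.
    s = np.empty_like(x)
    s[0] = x[0]
    for t in range(1, len(x)):
        s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
    return s
```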

Relevance: 60.00%

Abstract:

To date, although much attention has been paid to the estimation and modeling of the voice source (i.e., the glottal airflow volume velocity), the measurement and characterization of the supraglottal pressure wave have been much less studied. Some previous results have revealed that the supraglottal pressure wave has spectral resonances similar to those of the voice pressure wave, which makes the supraglottal wave partially intelligible. Although the explanation for this effect seems clearly related to the reflected pressure wave traveling upstream along the vocal tract, the influence of nonlinear source-filter interaction on it is less clear. This article provides insight into this issue by comparing acoustic analyses of measured and simulated supraglottal and voice waves. The simulations were performed using a high-dimensional discrete vocal fold model. The results of this comparative analysis indicate that the spectral resonances in the supraglottal wave are mainly caused by the regressive pressure wave that travels upstream along the vocal tract, and not by source-tract interaction. In contrast, according to the simulation results, source-tract interaction does play a role in the loss of intelligibility of the supraglottal wave with respect to the voice wave. This loss of intelligibility mainly corresponds to spectral differences at frequencies above 1500 Hz.
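To make the kind of comparison reported above concrete, here is a hedged sketch that quantifies spectral differences between the voice and supraglottal waves below and above the 1500 Hz boundary; the Hann window and the mean log-magnitude-difference metric are assumptions, not the article's actual analysis pipeline.

```python
import numpy as np

def band_spectrum_difference(voice, supraglottal, fs, f_split=1500.0):
    """Mean log-magnitude difference (dB) between two equal-length signals,
    reported separately below and above f_split Hz; fs is the sample rate."""
    n = len(voice)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    win = np.hanning(n)  # assumed analysis window
    v_db = 20 * np.log10(np.abs(np.fft.rfft(voice * win)) + 1e-12)
    s_db = 20 * np.log10(np.abs(np.fft.rfft(supraglottal * win)) + 1e-12)
    low, high = freqs < f_split, freqs >= f_split
    return (np.mean(np.abs(v_db[low] - s_db[low])),
            np.mean(np.abs(v_db[high] - s_db[high])))
```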

Relevance: 60.00%

Abstract:

The mapping of high-dimensional olfactory stimuli onto the two-dimensional surface of the nasal sensory epithelium constitutes the first step in the neuronal encoding of olfactory input. We have used zebrafish as a model system to analyze the spatial distribution of odorant receptor molecules in the olfactory epithelium by quantitative in situ hybridization. To this end, we have cloned 10 very divergent zebrafish odorant receptor molecules by PCR. Individual genes are expressed in sparse olfactory receptor neurons. Analysis of the position of labeled cells in a simplified coordinate system revealed three concentric, albeit overlapping, expression domains for the four odorant receptors analyzed in detail. Such regionalized expression should result in a corresponding segregation of functional response properties. This might represent the first step of spatial encoding of olfactory input or be essential for the development of the olfactory system.

Relevance: 60.00%

Abstract:

Self-organising neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas (GNG) is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation. However, this type of learning is time-consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process so that it completes within a predefined time. This paper proposes a Graphics Processing Unit (GPU) parallel implementation of the GNG using the Compute Unified Device Architecture (CUDA). In contrast to existing algorithms, the proposed GPU implementation accelerates the learning process while maintaining a good quality of representation. Comparative experiments using iterative, parallel and hybrid implementations are carried out to demonstrate the effectiveness of the CUDA implementation. The results show that GNG learning with the proposed implementation achieves a speed-up of 6× compared with the single-threaded CPU implementation. The GPU implementation has also been applied to a real application with time constraints, the acceleration of 3D scene reconstruction for egomotion, in order to validate the proposal.
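For orientation, a minimal serial sketch of the standard GNG adaptation step that the paper's CUDA version parallelizes (the distance computation over all units is the natural data-parallel kernel); unit insertion and global error decay are omitted, and the parameter values are illustrative.

```python
import numpy as np

def gng_step(units, edges, error, x, eps_b=0.2, eps_n=0.006, age_max=50):
    """One adaptation step of standard Growing Neural Gas (serial reference).

    units: (n, d) array of prototype positions; edges: dict {(i, j): age}, i < j;
    error: (n,) accumulated squared error per unit; x: one input sample (d,).
    """
    d2 = np.sum((units - x) ** 2, axis=1)
    s1, s2 = np.argsort(d2)[:2]                  # two nearest units
    error[s1] += d2[s1]
    units[s1] += eps_b * (x - units[s1])         # move the winner toward x
    for (i, j) in list(edges):
        if s1 in (i, j):
            edges[(i, j)] += 1                   # age edges emanating from the winner
            other = j if i == s1 else i
            units[other] += eps_n * (x - units[other])  # adapt topological neighbors
            if edges[(i, j)] > age_max:
                del edges[(i, j)]                # drop stale edges
    edges[tuple(sorted((s1, s2)))] = 0           # (re)connect the two nearest units
    return s1
```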

Relevance: 60.00%

Abstract:

The FANOVA (or "Sobol'-Hoeffding") decomposition of multivariate functions has been used for high-dimensional model representation and global sensitivity analysis. When the objective function f has no simple analytic form and is costly to evaluate, computing FANOVA terms may be unaffordable due to numerical integration costs. Several approximate approaches relying on Gaussian random field (GRF) models have been proposed to alleviate these costs, where f is substituted by a (kriging) predictor or by conditional simulations. Here we focus on FANOVA decompositions of GRF sample paths, and we notably introduce an associated kernel decomposition into 4^d terms, called KANOVA. An interpretation in terms of tensor product projections is obtained, and it is shown that projected kernels control both the sparsity of GRF sample paths and the dependence structure between FANOVA effects. Applications on simulated data show the relevance of the approach for designing new classes of covariance kernels dedicated to high-dimensional kriging.
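For reference, the Sobol'-Hoeffding form assumed here (f integrable on [0,1]^d with independent uniform inputs), and the origin of the 4^d count: applying the 2^d-term decomposition to each argument of the kernel yields 2^d × 2^d terms.

```latex
% FANOVA decomposition into 2^d mutually orthogonal terms indexed by subsets u:
f(\mathbf{x}) = \sum_{u \subseteq \{1,\dots,d\}} f_u(\mathbf{x}_u),
\qquad \int_0^1 f_u(\mathbf{x}_u)\,\mathrm{d}x_i = 0 \quad \text{for all } i \in u.
% Decomposing a kernel in both arguments gives the 2^d \times 2^d = 4^d KANOVA terms:
k(\mathbf{x},\mathbf{x}') = \sum_{u \subseteq \{1,\dots,d\}} \sum_{v \subseteq \{1,\dots,d\}} k_{u,v}(\mathbf{x},\mathbf{x}').
```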

Relevance: 60.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance: 60.00%

Abstract:

A manufacturing technique for the production of aluminum components is described. A resin-bonded part is formed by a rapid prototyping technique and then debound and infiltrated by a second aluminum alloy under a nitrogen atmosphere. During thermal processing, the aluminum reacts with the nitrogen and is partially transformed into a rigid aluminum nitride skeleton, which provides the structural rigidity during infiltration. The simplicity and rapidity of this process in comparison to conventional production routes, combined with the ability to fabricate complicated parts of almost any geometry and with high dimensional precision, provide an additional means to manufacture aluminum components.

Relevance: 60.00%

Abstract:

With the rapid increase in both centralized video archives and distributed WWW video resources, content-based video retrieval is gaining importance. To support such applications efficiently, content-based video indexing must be addressed. Typically, each video is represented by a sequence of frames. Due to the high dimensionality of the frame representation and the large number of frames, video indexing introduces an additional degree of complexity. In this paper, we address the problem of content-based video indexing and propose an efficient solution, called the Ordered VA-File (OVA-File), based on the VA-file. OVA-File is a hierarchical structure with two novel features: 1) the whole file is partitioned into slices such that only a small number of slices need to be accessed and checked during k Nearest Neighbor (kNN) search, and 2) insertions of new vectors into the OVA-File are handled efficiently, such that the average distance between the new vectors and the approximations near that position is minimized. To facilitate search, we present an efficient approximate kNN algorithm named Ordered VA-LOW (OVA-LOW) based on the proposed OVA-File. OVA-LOW first chooses candidate OVA-Slices by ranking the distances between their centers and the query vector, and then visits all approximations in the selected OVA-Slices to compute the approximate kNN. The number of candidate OVA-Slices is controlled by a user-defined parameter delta, so that adjusting delta provides a trade-off between query cost and result quality. Query by video clip, where the query consists of multiple frames, is also discussed. Extensive experimental studies using real video data sets were conducted, and the results showed that our methods can yield a significant speed-up over an existing VA-file-based method and iDistance, with high query result quality. Furthermore, by incorporating the temporal correlation of video content, our methods achieved much more efficient performance.
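A hedged sketch of the two-phase search just described, with plain arrays standing in for the VA-file machinery; the function and parameter names are illustrative, and the vector-approximation step itself is not reproduced.

```python
import heapq
import numpy as np

def ova_low_knn(query, slice_centers, slice_members, vectors, k=10, delta=3):
    """Two-phase approximate kNN in the spirit of OVA-LOW.

    slice_centers: (m, d) array, one center per OVA-Slice;
    slice_members: list of index arrays into `vectors`, one per slice;
    delta: number of candidate slices to visit (cost/quality trade-off).
    """
    # Phase 1: rank slices by center-to-query distance; keep the delta closest.
    center_d = np.linalg.norm(slice_centers - query, axis=1)
    candidate_slices = np.argsort(center_d)[:delta]
    # Phase 2: scan all vectors in the selected slices only.
    heap = []  # max-heap of (-dist, idx) holding the best k so far
    for s in candidate_slices:
        for idx in slice_members[s]:
            d = np.linalg.norm(vectors[idx] - query)
            if len(heap) < k:
                heapq.heappush(heap, (-d, idx))
            elif d < -heap[0][0]:
                heapq.heapreplace(heap, (-d, idx))
    return sorted((-nd, idx) for nd, idx in heap)  # (distance, index) pairs
```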

Relevance: 60.00%

Abstract:

In this paper, we present a novel indexing technique called Multi-scale Similarity Indexing (MSI) to index an image's multiple features in a single one-dimensional structure. For both text and visual feature spaces, the similarity between a point and a local partition's center in the individual space is used as the indexing key, with similarity values from different features distinguished by different scales. A single indexing tree can then be built on these keys. Based on the property that relevant images have similar similarity values to the center of the same local partition in any feature space, a certain number of irrelevant images can be quickly pruned using the triangle inequality on the indexing keys. To remove the "dimensionality curse" present in high-dimensional structures, we propose a new technique called Local Bit Stream (LBS). LBS transforms an image's text and visual feature representations into simple, uniform and effective bit stream (BS) representations based on the local partition's center. Such BS representations are small in size and fast to compare, since only bit operations are involved. By comparing the common bits of two BSs, most irrelevant images can be immediately filtered out. To effectively integrate multiple features, we also investigated the following evidence combination techniques: Certainty Factor, Dempster-Shafer theory, Compound Probability, and Linear Combination. Our extensive experiments showed that a single one-dimensional index on multiple features greatly improves on multiple indices over those features, and that our LBS method outperforms sequential scan on high-dimensional spaces by an order of magnitude. Certainty Factor and Dempster-Shafer theory performed best in combining the multiple similarities from the corresponding features.
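Under one plausible reading of the bit-comparison step above, the filter reduces to an XOR and a popcount; the signature construction from the local partition's center is not reproduced here, and min_agree is an assumed tuning parameter.

```python
def lbs_prefilter(query_sig: int, image_sig: int, nbits: int, min_agree: int) -> bool:
    """Keep an image only if its bit stream agrees with the query's on at least
    min_agree of the nbits positions; disagreements are counted with one XOR."""
    disagree = bin((query_sig ^ image_sig) & ((1 << nbits) - 1)).count("1")
    return nbits - disagree >= min_agree
```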

Relevance: 60.00%

Abstract:

In many advanced applications, data are described by multiple high-dimensional features. Moreover, different queries may weight these features differently, and some may not even specify all of them. In this paper, we propose a solution to support efficient query processing in these applications. We devise a novel representation that compactly captures f features in two components: the first is a 2D vector that reflects the distance range (minimum and maximum values) of the f features with respect to a reference point (the center of the space) in a metric space, and the second is a bit signature, with two bits per dimension, obtained by analyzing each feature's descending energy histogram. This representation enables two levels of filtering: the first component prunes away points that do not share similar distance ranges, while the bit signature filters away points based on the dimensions of the relevant features. Moreover, the representation facilitates the use of a single index structure to further speed up processing; we employ the classical B+-tree for this purpose. We also propose a KNN search algorithm that exploits the access orders of critical dimensions of highly selective features, together with partial distances, to prune the search space more effectively. Our extensive experiments on both real-life and synthetic data sets show that the proposed solution offers significant performance advantages over sequential scan and over retrieval methods using single and multiple VA-files.
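A sketch of the first-level filter described above, assuming the query's and points' distance ranges are taken with respect to the same reference point; the bit-signature second level and the B+-tree mapping are not reproduced.

```python
import numpy as np

def range_prune(query_range, point_ranges, radius):
    """First-level pruning by distance ranges.

    query_range:  (q_lo, q_hi), min/max distances of the query's features to the
                  reference point (the center of the space).
    point_ranges: (n, 2) array with each point's stored (min, max) distance range.
    By the triangle inequality, two features within `radius` of each other have
    center distances differing by at most `radius`, so a point whose range does
    not overlap the query's range widened by `radius` cannot match.
    """
    q_lo, q_hi = query_range
    lo, hi = point_ranges[:, 0], point_ranges[:, 1]
    keep = (hi >= q_lo - radius) & (lo <= q_hi + radius)
    return np.flatnonzero(keep)  # survivors proceed to the bit-signature filter
```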

Relevance: 60.00%

Abstract:

With rapid advances in video processing technologies and ever-faster growth in network bandwidth, the popularity of video content publishing and sharing has made similarity search an indispensable operation for retrieving videos of interest to users. Video similarity is usually measured by the percentage of similar frames shared by two video sequences, with each frame typically represented as a high-dimensional feature vector. Unfortunately, the high complexity of video content poses the following major challenges for fast retrieval: (a) effective and compact video representations, (b) efficient similarity measurements, and (c) efficient indexing on the compact representations. In this paper, we propose a number of methods to achieve fast similarity search over a very large video database. First, each video sequence is summarized into a small number of clusters, each of which contains similar frames and is represented by a novel compact model called the Video Triplet (ViTri). A ViTri models a cluster as a tightly bounded hypersphere described by its position, radius, and density. ViTri similarity is measured by the volume of intersection between two hyperspheres multiplied by the minimal density, i.e., the estimated number of similar frames shared by two clusters. The total number of similar frames is then estimated to derive the overall similarity between two video sequences, so the time complexity of the video similarity measure can be reduced greatly. To further reduce the number of similarity computations on ViTris, we introduce a new one-dimensional transformation technique which rotates and shifts the original axis system using PCA, in such a way that the original inter-distances between high-dimensional vectors are maximally retained after the mapping. An efficient B+-tree is then built on the transformed one-dimensional values of the ViTris' positions. This transformation enables the B+-tree to achieve its optimal performance by quickly filtering out a large portion of non-similar ViTris. Our extensive experiments on large real video datasets demonstrate the effectiveness of our proposals, which outperform existing methods significantly.
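A sketch of the one-dimensional transformation described above, using an SVD for the PCA rotation; a sorted array and bisection stand in for the B+-tree, and the hypersphere-intersection similarity itself is not reproduced.

```python
import bisect
import numpy as np

def pca_keys(centers):
    """Project d-dimensional ViTri positions onto the first principal axis,
    yielding the scalar keys on which the B+-tree is built."""
    mean = centers.mean(axis=0)
    _, _, vt = np.linalg.svd(centers - mean, full_matrices=False)
    axis = vt[0]                      # unit-norm direction of maximum variance
    return (centers - mean) @ axis, mean, axis

def key_range_candidates(sorted_keys, query_key, radius):
    """Since |key(a) - key(b)| <= ||a - b|| for a unit projection axis, ViTris
    whose key differs from the query's by more than `radius` cannot lie within
    `radius` of it; a B+-tree range scan plays the role of these bisections."""
    lo = bisect.bisect_left(sorted_keys, query_key - radius)
    hi = bisect.bisect_right(sorted_keys, query_key + radius)
    return lo, hi                     # index range of surviving candidates
```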

Relevance: 60.00%

Abstract:

In this paper, we present a novel indexing technique called Multi-scale Similarity Indexing (MSI) to index an image's multiple features in a single one-dimensional structure. For both text and visual feature spaces, the similarity between a point and a local partition's center in the individual space is used as the indexing key, with similarity values from different features distinguished by different scales. A single indexing tree can then be built on these keys. Based on the property that relevant images have similar similarity values to the center of the same local partition in any feature space, a certain number of irrelevant images can be quickly pruned using the triangle inequality on the indexing keys. To remove the "dimensionality curse" present in high-dimensional structures, we propose a new technique called Local Bit Stream (LBS). LBS transforms an image's text and visual feature representations into simple, uniform and effective bit stream (BS) representations based on the local partition's center. Such BS representations are small in size and fast to compare, since only bit operations are involved. By comparing the common bits of two BSs, most irrelevant images can be immediately filtered out. Our extensive experiments showed that a single one-dimensional index on multiple features greatly improves on multiple indices, and our LBS method outperforms sequential scan on high-dimensional spaces by an order of magnitude.

Relevance: 60.00%

Abstract:

This thesis is a study of the generation of topographic mappings - dimension-reducing transformations of data that preserve some element of geometric structure - with feed-forward neural networks. As an alternative to established methods, a transformational variant of Sammon's method is proposed, where the projection is effected by a radial basis function neural network. This approach is related to the statistical field of multidimensional scaling (MDS), and from it the concept of a 'subjective metric' is defined, which permits the exploitation of additional prior knowledge about the data in the mapping process. This in turn enables the generation of more appropriate feature spaces for the purposes of enhanced visualisation or subsequent classification. A comparison with established methods for feature extraction is given for data taken from the 1992 Research Assessment Exercise for higher educational institutions in the United Kingdom. This is a difficult high-dimensional dataset, and it illustrates well the benefit of the new topographic technique. A generalisation of the proposed model is considered for the implementation of the classical MDS routine. This is related to Oja's principal subspace neural network, whose learning rule is shown to descend the error surface of the proposed MDS model. Some of the technical issues concerning the design and training of topographic neural networks are investigated. It is shown that neural network models can be less sensitive to entrapment in the sub-optimal local minima that badly affect the standard Sammon algorithm, and they tend to exhibit good generalisation as a result of implicit weight decay in the training process. It is further argued that, for ideal structure retention, the network transformation should be perfectly smooth for all inter-data directions in input space. Finally, there is a critique of optimisation techniques for topographic mappings, and a new training algorithm is proposed. A convergence proof is given, and the method is shown to produce lower-error mappings more rapidly than previous algorithms.
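For reference, a minimal computation of the standard Sammon stress that the transformational variant minimizes; X holds input-space points and Y their projections, which in the thesis are produced by the RBF network rather than optimized freely.

```python
import numpy as np

def sammon_stress(X, Y, eps=1e-12):
    """Sammon's stress between pairwise distances in the input space (X)
    and the projected space (Y); lower is better."""
    def pdist(A):
        diff = A[:, None, :] - A[None, :, :]
        return np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(X), k=1)        # each pair counted once
    dx, dy = pdist(X)[iu], pdist(Y)[iu]
    return ((dx - dy) ** 2 / (dx + eps)).sum() / (dx.sum() + eps)
```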

Relevance: 60.00%

Abstract:

Visualization has proven to be a powerful and widely-applicable tool for the analysis and interpretation of data. Most visualization algorithms aim to find a projection from the data space down to a two-dimensional visualization space. However, for complex data sets living in a high-dimensional space, it is unlikely that a single two-dimensional projection can reveal all of the interesting structure. We therefore introduce a hierarchical visualization algorithm which allows the complete data set to be visualized at the top level, with clusters and sub-clusters of data points visualized at deeper levels. The algorithm is based on a hierarchical mixture of latent variable models, whose parameters are estimated using the expectation-maximization algorithm. We demonstrate the principle of the approach first on a toy data set, and then apply the algorithm to the visualization of a synthetic data set in 12 dimensions obtained from a simulation of multi-phase flows in oil pipelines, and to data in 36 dimensions derived from satellite images.
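For orientation, the standard form of the latent variable model underlying each plot (a two-dimensional latent space mapped linearly into the data space with Gaussian noise), with the top level a mixture of such models; the parameters are fitted by EM as stated above.

```latex
% Each sub-model: a 2-D latent point x generates a data point t in R^D:
t = W x + \mu + \epsilon, \qquad
x \sim \mathcal{N}(0, I_2), \quad \epsilon \sim \mathcal{N}(0, \sigma^2 I_D),
% and the top-level density is a mixture over M such models:
p(t) = \sum_{i=1}^{M} \pi_i \, p(t \mid i).
```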