984 results for Dimensional reduction
Abstract:
A novel non-linear dimensionality reduction method, called Temporal Laplacian Eigenmaps, is introduced to process time series data efficiently. In this embedding-based approach, temporal information is intrinsic to the objective function, which produces descriptions of low-dimensional spaces with time coherence between data points. Since the proposed scheme also includes bidirectional mapping between the data and embedded spaces and automatic tuning of key parameters, it offers the same benefits as mapping-based approaches. Experiments on a couple of computer vision applications demonstrate the superiority of the new approach over other dimensionality reduction methods in terms of accuracy. Moreover, its lower computational cost and generalisation abilities suggest it is scalable to larger datasets. © 2010 IEEE.
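The abstract does not give the objective function, but the core idea — a Laplacian Eigenmaps embedding whose neighbourhood graph also connects temporally consecutive samples — can be sketched as follows. The function name, the unit edge weights and the `temporal_weight` parameter are illustrative assumptions, not the authors' formulation:

```python
import numpy as np
from scipy.linalg import eigh

def temporal_laplacian_eigenmaps(X, n_neighbors=4, n_components=2, temporal_weight=1.0):
    """Sketch: Laplacian Eigenmaps whose neighbourhood graph is augmented
    with edges between temporally consecutive samples (rows of X)."""
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(D[i])[1:n_neighbors + 1]:   # spatial k-NN edges
            W[i, j] = W[j, i] = 1.0
    for i in range(n - 1):                              # temporal edges t <-> t+1
        W[i, i + 1] = W[i + 1, i] = max(W[i, i + 1], temporal_weight)
    deg = W.sum(axis=1)
    L = np.diag(deg) - W                                # unnormalised graph Laplacian
    vals, vecs = eigh(L, np.diag(deg))                  # generalised eigenproblem L y = lam D y
    order = np.argsort(vals)
    return vecs[:, order[1:n_components + 1]]           # drop the constant eigenvector
```

Because the temporal edges guarantee a connected graph, every node has nonzero degree and the generalised eigenproblem is well posed.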
Abstract:
The surface modification of a mechanochemically prepared Ag/Al2O3 catalyst compared with catalysts prepared by standard wet impregnation methods has been probed using two-dimensional T1-T2 NMR correlations, H2O temperature programmed desorption (TPD) and DRIFTS. The catalysts were examined for the selective catalytic reduction of NOx using n-octane in the presence and absence of H2. Higher activities were observed for the ball-milled catalysts irrespective of whether H2 was added. This higher activity is thought to be related to the increased affinity of the catalyst surface towards the hydrocarbon relative to water, following mechanochemical preparation, resulting in higher concentrations of the hydrocarbon and lower concentrations of water at the surface. DRIFTS experiments demonstrated that surface isocyanate was formed significantly more quickly and had a higher surface concentration in the case of the ball-milled catalyst, which has been correlated with the stronger interaction of the n-octane with the surface. This increased interaction may also be the cause of the reduced activation barrier measured for this catalyst compared with the wet impregnated system. The decreased interaction of water with the surface on ball milling is thought to reduce the effect of site blocking whilst still providing a sufficiently high surface concentration of water to enable effective hydrolysis of the isocyanate to form ammonia and, thereafter, N2. This journal is © The Royal Society of Chemistry.
Abstract:
Objective:
The aim of this study was to identify sources of anatomical misrepresentation due to the location of camera mounting, tumour motion velocity and image processing artefacts in order to optimise the 4DCT scan protocol and improve geometrical-temporal accuracy.
Methods: A phantom with an imaging insert was driven with a sinusoidal superior-inferior motion of varying amplitude and period for 4DCT scanning. The length of a high-density cube within the insert was measured using treatment planning software to determine the accuracy of its spatial representation. Scan parameters were varied, including the tube rotation period and the cine time between reconstructed images. A CT image quality phantom was used to measure various image quality signatures under the scan parameters tested.
Results: No significant difference in spatial accuracy was found for 4DCT scans carried out using the wall-mounted or couch-mounted camera for sinusoidal target motion. Greater spatial accuracy was found for 4DCT scans carried out using a tube rotation speed of 0.5 s rather than 1.0 s. The reduction in image quality when using the faster rotation speed was not enough to require an increase in patient dose.
Conclusions: 4DCT accuracy may be increased by optimising scan parameters, including choosing faster tube rotation speeds. Peak misidentification in the recorded breathing trace leads to spatial artefacts; this risk can be reduced by using a couch-mounted infrared camera.
Advances in knowledge: This study explicitly shows that 4DCT scan accuracy is improved by scanning with a faster CT tube rotation speed.
Abstract:
The expanding remnant from SN 1987A is an excellent laboratory for investigating the physics of supernova explosions. There are still a large number of outstanding questions, such as the reason for the asymmetric radio morphology, the structure of the pre-supernova environment, and the efficiency of particle acceleration at the supernova shock. We explore these questions using three-dimensional simulations of the expanding remnant between days 820 and 10,000 after the supernova. We combine a hydrodynamical simulation with semi-analytic treatments of diffusive shock acceleration and magnetic field amplification to derive radio emission as part of an inverse problem. Simulations show that an asymmetric explosion, combined with magnetic field amplification at the expanding shock, is able to replicate the persistent one-sided radio morphology of the remnant. We use an asymmetric Truelove & McKee progenitor with an envelope mass of 10 M☉ and an energy of 1.5 × 10^44 J. A termination shock in the progenitor's stellar wind at a distance of 0″.43-0″.51 provides a good fit to the turn-on of radio emission around day 1200. For the H II region, a minimum distance of 0″.63 ± 0″.01 and a maximum particle number density of (7.11 ± 1.78) × 10^7 m^-3 produce a good fit to the evolving average radius and velocity of the expanding shocks from day 2000 to day 7000 after the explosion. The model predicts a noticeable reduction, and possibly a temporary reversal, in the asymmetric radio morphology of the remnant after day 7000, once the forward shock has left the eastern lobe of the equatorial ring.
Abstract:
The electrochemical performance of one-dimensional porous La0.5Sr0.5CoO2.91 nanotubes as a cathode catalyst for rechargeable nonaqueous lithium-oxygen (Li-O2) batteries is reported here for the first time. In this study, one-dimensional porous La0.5Sr0.5CoO2.91 nanotubes were prepared by a simple and efficient electrospinning technique. These materials displayed an initial discharge capacity of 7205 mAh g^-1 with a plateau at around 2.66 V at a current density of 100 mA g^-1. It was found that the La0.5Sr0.5CoO2.91 nanotubes promoted both the oxygen reduction and oxygen evolution reactions in alkaline media and a nonaqueous electrolyte, thereby improving the energy and coulombic efficiency of the Li-O2 batteries. The cyclability was maintained for 85 cycles without any sharp decay under a limited discharge depth of 1000 mAh g^-1, suggesting that such a bifunctional electrocatalyst is a promising candidate for the oxygen electrode in Li-O2 batteries.
Abstract:
The widespread employment of carbon-epoxy laminates in high-responsibility and severely loaded applications introduces an issue regarding their handling after damage. Repair of these structures should be evaluated, instead of their disposal, for cost-saving and ecological purposes. From this perspective, the availability of efficient repair methods is essential to restore the strength of the structure. The development and validation of accurate predictive tools for the behaviour of repairs are also extremely important, reducing the costs and time associated with extensive test programmes. Compared with strap repairs, scarf repairs have the advantages of higher efficiency and the absence of aerodynamic disturbance. This work reports on a numerical study of the tensile behaviour of three-dimensional scarf repairs in carbon-epoxy structures, using a ductile adhesive (Araldite® 2015). The finite element analysis was performed in ABAQUS® and Cohesive Zone Modelling was used to simulate damage onset and growth in the adhesive layer. Trapezoidal cohesive laws in each pure mode were used to account for the ductility of the specific adhesive. A parametric study was performed on the repair width and scarf angle. The use of over-laminating plies covering the repaired region at the outer or both repair surfaces was also tested as an attempt to increase the repair efficiency. The obtained results allowed the proposal of design principles for repairing composite structures.
Abstract:
This master's thesis presents a new unsupervised approach for detecting and segmenting urban regions in hyperspectral images. The proposed method requires three steps. First, in order to reduce the computational cost of our algorithm, a colour image of the spectral content is estimated. To this end, a nonlinear dimensionality reduction step, based on two complementary but contradictory criteria of good visualisation, namely accuracy and contrast, is carried out to produce a colour display of each hyperspectral image. Next, to discriminate urban regions from non-urban regions, the second step consists of extracting a few discriminative (and complementary) features from this colour hyperspectral image. To this end, we extracted a series of discriminative parameters describing the characteristics of an urban area, which is mainly composed of man-made objects with simple, regular geometric shapes. We used textural features based on grey levels, the gradient magnitude or parameters derived from the co-occurrence matrix, combined with structural features based on the local orientation of the image gradient and the local detection of straight-line segments. To further reduce the computational complexity of our approach and to avoid the "curse of dimensionality" problem that arises when clustering high-dimensional data, we decided, in the final step, to classify each textural or structural feature individually with a simple K-means procedure and then to combine these coarse segmentations, obtained at low cost, using an efficient segmentation-map fusion model.
The experiments reported here show that this strategy is visually effective and compares favourably with other methods for detecting and segmenting urban areas from hyperspectral images.
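A minimal sketch of the final step of this method — classifying each feature map individually with K-means and then fusing the coarse segmentations — assuming binary (urban/non-urban) labels and simple majority voting as the fusion model; the thesis itself uses a more elaborate fusion scheme:

```python
import numpy as np

def kmeans_1d(x, k=2, iters=50):
    """Plain Lloyd's K-means on a single feature vector (one value per pixel)."""
    centers = np.linspace(x.min(), x.max(), k)   # deterministic spread initialisation
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels

def fuse_segmentations(label_maps):
    """Combine coarse binary segmentations by per-pixel majority vote."""
    return (np.stack(label_maps).mean(axis=0) >= 0.5).astype(int)
```

Each per-feature segmentation is cheap because the clustering is one-dimensional; the fusion step then reconciles the coarse maps into a single urban/non-urban labelling.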
Abstract:
The creation of three-dimensionally engineered nanoporous architectures via covalently interconnected nanoscale building blocks remains one of the fundamental challenges in nanotechnology. Here we report the synthesis of ordered, stacked macroscopic three-dimensional (3D) solid scaffolds of graphene oxide (GO) fabricated via chemical cross-linking of two-dimensional GO building blocks. The resulting 3D GO network solids form highly porous interconnected structures, and the controlled reduction of these structures leads to formation of 3D conductive graphene scaffolds. These 3D architectures show promise for potential applications such as gas storage; CO2 gas adsorption measurements carried out under ambient conditions show high sorption capacity, demonstrating the possibility of creating new functional carbon solids starting with two-dimensional carbon layers.
Abstract:
This thesis explores how recurrent neural networks can be exploited for learning high-dimensional mappings. Since recurrent networks are as powerful as Turing machines, an interesting question is how recurrent networks can be used to simplify the problem of learning from examples. The main problem with learning high-dimensional functions is the curse of dimensionality, which roughly states that the number of examples needed to learn a function increases exponentially with the input dimension. This thesis proposes a way of avoiding this problem by using a recurrent network to decompose a high-dimensional function into many lower-dimensional functions connected in a feedback loop.
Abstract:
Functional Data Analysis (FDA) deals with samples where a whole function is observed for each individual. A particular case of FDA arises when the observed functions are density functions, which are also an example of infinite-dimensional compositional data. In this work we compare several methods of dimensionality reduction for this particular type of data: functional principal components analysis (PCA), with or without a previous data transformation, and multidimensional scaling (MDS) for different inter-density distances, one of them taking into account the compositional nature of density functions. The different methods are applied to both artificial and real data (household income distributions).
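One transformation-based variant of the kind compared here — functional PCA after a centred log-ratio (clr) transform, which respects the compositional nature of densities — can be sketched on discretised densities. This is an illustrative sketch, not the authors' exact implementation, and the `eps` regulariser is an added assumption to avoid log of zero:

```python
import numpy as np

def clr(p, eps=1e-12):
    """Centred log-ratio transform of discretised densities (rows sum to 1)."""
    logp = np.log(p + eps)
    return logp - logp.mean(axis=1, keepdims=True)

def functional_pca(P, n_components=2):
    """PCA of clr-transformed densities: each row of P is one density."""
    Z = clr(P)
    Zc = Z - Z.mean(axis=0)                    # centre across individuals
    U, S, Vt = np.linalg.svd(Zc, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]
    return scores, Vt[:n_components]           # per-density scores, principal modes
```

The clr step maps each density into an unconstrained space where ordinary PCA is meaningful; skipping it corresponds to the untransformed functional PCA variant.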
Abstract:
The problem of state estimation occurs in many applications of fluid flow. For example, to produce a reliable weather forecast it is essential to find the best possible estimate of the true state of the atmosphere. To find this best estimate a nonlinear least squares problem has to be solved subject to dynamical system constraints. Usually this is solved iteratively by an approximate Gauss–Newton method where the underlying discrete linear system is in general unstable. In this paper we propose a new method for deriving low order approximations to the problem based on a recently developed model reduction method for unstable systems. To illustrate the theoretical results, numerical experiments are performed using a two-dimensional Eady model – a simple model of baroclinic instability, which is the dominant mechanism for the growth of storms at mid-latitudes. It is a suitable test model to show the benefit that may be obtained by using model reduction techniques to approximate unstable systems within the state estimation problem.
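The approximate Gauss–Newton iteration mentioned above repeatedly solves the normal equations of a linearised least-squares problem. A minimal unconstrained sketch follows; the actual data-assimilation problem adds the dynamical-system constraints and the model reduction discussed in the paper:

```python
import numpy as np

def gauss_newton(r, J, x0, iters=20):
    """Minimise 0.5 * ||r(x)||^2 by repeatedly linearising the residual r."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Jx, rx = J(x), r(x)
        # normal equations of the linearised subproblem: J^T J dx = -J^T r
        dx = np.linalg.solve(Jx.T @ Jx, -Jx.T @ rx)
        x = x + dx
    return x
```

For a linear residual r(x) = Ax - b the iteration reduces to ordinary least squares and converges in a single step, which makes a convenient sanity check.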
Abstract:
During winter the ocean surface in polar regions freezes over to form sea ice. In the summer the upper layers of sea ice and snow melt, producing meltwater that accumulates in Arctic melt ponds on the surface of sea ice. An accurate estimate of the fraction of the sea ice surface covered in melt ponds is essential for a realistic estimate of the albedo for global climate models. We present a melt-pond–sea-ice model that simulates the three-dimensional evolution of melt ponds on an Arctic sea ice surface. The advancements of this model compared to previous models are the inclusion of snow topography; the calculation of meltwater transport rates from hydraulic gradients and ice permeability; and the incorporation of a detailed one-dimensional thermodynamic radiative balance. Results of model runs simulating first-year and multiyear sea ice are presented. Model results show good agreement with observations, with duration of pond coverage, pond area, and ice ablation comparing well for both the first-year ice and multiyear ice cases. We investigate the sensitivity of the melt pond cover to changes in ice topography, snow topography, and vertical ice permeability. Snow was found to have an important impact mainly at the start of the melt season, whereas initial ice topography strongly controlled pond size and pond fraction throughout the melt season. A reduction in ice permeability allowed surface flooding of relatively flat, first-year ice but had little impact on the pond coverage of rougher, multiyear ice. We discuss our results, including model shortcomings and areas of experimental uncertainty.
Abstract:
It is known that the empirical orthogonal function method is unable to detect possible nonlinear structure in climate data. Here, isometric feature mapping (Isomap), as a tool for nonlinear dimensionality reduction, is applied to 1958–2001 ERA-40 sea-level pressure anomalies to study nonlinearity of the Asian summer monsoon intraseasonal variability. Using the leading two Isomap time series, the probability density function is shown to be bimodal. A two-dimensional bivariate Gaussian mixture model is then applied to identify the monsoon phases, the obtained regimes representing enhanced and suppressed phases, respectively. The relationship with the large-scale seasonal mean monsoon indicates that the frequency of monsoon regime occurrence is significantly perturbed in agreement with conceptual ideas, with preference for enhanced convection on intraseasonal time scales during large-scale strong monsoons. Trend analysis suggests a shift in concentration of monsoon convection, with less emphasis on South Asia and more on the East China Sea.
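Isomap itself embeds data by building a k-nearest-neighbour graph, computing geodesic (shortest-path) distances on it, and applying classical MDS; a compact sketch, with illustrative parameter values:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def isomap(X, n_neighbors=6, n_components=2):
    """Minimal Isomap: k-NN graph -> geodesic distances -> classical MDS."""
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    G = np.full((n, n), np.inf)                    # inf marks a missing edge
    for i in range(n):
        idx = np.argsort(D[i])[:n_neighbors + 1]   # nearest neighbours (incl. self)
        G[i, idx] = D[i, idx]
    G = np.minimum(G, G.T)                         # symmetrise the k-NN graph
    geo = shortest_path(G, method="D", directed=False)
    # classical MDS on the squared geodesic distances
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (geo ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))
```

In the analysis described above, the leading two embedding coordinates would then be fed to a two-component bivariate Gaussian mixture (e.g. fitted by EM) to identify the enhanced and suppressed monsoon regimes.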
Abstract:
Approximate Bayesian computation (ABC) methods make use of comparisons between simulated and observed summary statistics to overcome the problem of computationally intractable likelihood functions. As the practical implementation of ABC requires computations based on vectors of summary statistics, rather than full data sets, a central question is how to derive low-dimensional summary statistics from the observed data with minimal loss of information. In this article we provide a comprehensive review and comparison of the performance of the principal methods of dimension reduction proposed in the ABC literature. The methods are split into three non-mutually exclusive classes consisting of best subset selection methods, projection techniques and regularization. In addition, we introduce two new methods of dimension reduction. The first is a best subset selection method based on Akaike and Bayesian information criteria, and the second uses ridge regression as a regularization procedure. We illustrate the performance of these dimension reduction techniques through the analysis of three challenging models and data sets.
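The ridge-regression idea can be sketched as follows: regress the simulated parameters on a vector of candidate statistics with a ridge penalty, and use the fitted linear map as the low-dimensional summary. The function name and the toy setup are illustrative, not the article's implementation:

```python
import numpy as np

def ridge_summaries(S, theta, alpha=1.0):
    """Regress parameters theta (n x k) on candidate statistics S (n x p)
    with a ridge penalty; the fitted linear map yields k summaries."""
    S_mean, t_mean = S.mean(axis=0), theta.mean(axis=0)
    Sc, tc = S - S_mean, theta - t_mean
    # ridge solution: (Sc^T Sc + alpha I)^-1 Sc^T tc
    W = np.linalg.solve(Sc.T @ Sc + alpha * np.eye(S.shape[1]), Sc.T @ tc)
    summarise = lambda s_new: (s_new - S_mean) @ W + t_mean
    return summarise, W
```

Applied to a reference table of simulations, `summarise` compresses any candidate statistic vector to one summary per parameter, which is the dimension reduction ABC then works with.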
Abstract:
Learning a low-dimensional manifold from highly nonlinear data of high dimensionality has become increasingly important for discovering intrinsic representations that can be utilized for data visualization and preprocessing. The autoencoder is a powerful dimensionality reduction technique based on minimizing reconstruction error, and it has regained popularity because it has been used efficiently for greedy pretraining of deep neural networks. Gaussian Processes (GPs) have been shown to be superior to Neural Networks (NNs) in model inference, optimization and performance, and GPs have been successfully applied in nonlinear Dimensionality Reduction (DR) algorithms such as the Gaussian Process Latent Variable Model (GPLVM). In this paper we propose the Gaussian Processes Autoencoder Model (GPAM) for dimensionality reduction by extending the classic NN-based autoencoder to a GP-based autoencoder. Interestingly, the novel model can also be viewed as a back-constrained GPLVM (BC-GPLVM) in which the back-constraint smooth function is represented by a GP. Experiments verify the performance of the newly proposed model.
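For reference, the classic NN-based autoencoder that GPAM extends minimises reconstruction error through an encoder-decoder pair. A minimal numpy sketch with a tanh encoder and linear decoder follows — an illustrative baseline, not the proposed GP model, and the hyperparameter values are assumptions:

```python
import numpy as np

def train_autoencoder(X, n_hidden=2, lr=0.1, epochs=500, seed=0):
    """Minimal autoencoder: tanh encoder, linear decoder, full-batch
    gradient descent on the mean squared reconstruction error."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.1, (d, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.1, (n_hidden, d)); b2 = np.zeros(d)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # encode to the low-dimensional code
        E = H @ W2 + b2 - X                 # reconstruction error
        dH = (E @ W2.T) * (1.0 - H ** 2)    # backprop through tanh
        W2 -= lr * (H.T @ E) / n; b2 -= lr * E.mean(axis=0)
        W1 -= lr * (X.T @ dH) / n; b1 -= lr * dH.mean(axis=0)
    encode = lambda Z: np.tanh(Z @ W1 + b1)
    mse = ((encode(X) @ W2 + b2 - X) ** 2).mean()
    return encode, mse
```

GPAM, by contrast, replaces these parametric encoder/decoder mappings with GPs; the sketch only shows the baseline objective being extended.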