920 results for Spherical Geometry
Abstract:
Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging on an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.
A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves certain class-specific features, making it complementary to the recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
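As an illustration of the quantities driving these results, the following minimal sketch (an editorial illustration, not code from the dissertation) computes the principal angles between two subspaces from the SVD of the product of their orthonormal bases, together with the two misclassification proxies named above; the function name and the random test data are purely illustrative.

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between span(A) and span(B), computed from
    the SVD of the product of orthonormal bases (columns = basis)."""
    Qa, _ = np.linalg.qr(A)                      # orthonormal basis for span(A)
    Qb, _ = np.linalg.qr(B)                      # orthonormal basis for span(B)
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(cosines, -1.0, 1.0))

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))                 # two 3-dimensional subspaces
B = rng.standard_normal((50, 3))                 # of a 50-dimensional space
theta = principal_angles(A, B)
print(np.prod(np.sin(theta)))                    # small-mismatch proxy
print(np.sum(np.sin(theta) ** 2))                # larger-mismatch proxy
```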
The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer degraded performance when generalizing to unseen (test) data.
From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides a way of regularizing the learning machine so as to reduce the generalization error and thereby mitigate overfitting. Two approaches to preventing overfitting are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to align with the principal components of this adjacency matrix; a sketch of such a penalty follows. Experiments on benchmark datasets demonstrate a clear advantage of the proposed approaches when the training set is small.
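To make the second approach concrete, here is a minimal sketch, assuming the graph adjacency matrix W has already been built for the training data and `features` holds one learned feature vector per data point; the function name and the use of a plain eigendecomposition are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def adjacency_alignment_penalty(features, W, k=10):
    """Penalty encouraging learned features to lie in the span of the
    top-k eigenvectors ("principal components") of the adjacency matrix.
    features: (n, d) array, one row per data point; W: (n, n) adjacency."""
    W = 0.5 * (W + W.T)                          # symmetrize
    _, eigvecs = np.linalg.eigh(W)               # eigenvalues in ascending order
    U = eigvecs[:, -k:]                          # top-k principal components
    residual = features - U @ (U.T @ features)   # component outside span(U)
    return float(np.sum(residual ** 2))          # energy to be minimized
```

In training, a penalty of this form would be added to the task loss so that gradient descent trades off data fit against alignment with the manifold graph.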
Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods are used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, in which the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
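The per-datum deviation statistic can be sketched as follows, assuming each local neighborhood is summarized by a mean vector and an orthonormal basis for its affine subspace; the names and the plain residual norm are illustrative assumptions.

```python
import numpy as np

def affine_deviation(x, mean, basis):
    """Distance from datum x to the local affine approximation
    {mean + basis @ c}. Columns of basis are assumed orthonormal.
    Large deviations flag candidate anomalies / changepoints."""
    r = x - mean
    return float(np.linalg.norm(r - basis @ (basis.T @ r)))
```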
Abstract:
This work explores the use of statistical methods in describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Rapid development in robotics and computer vision has increased the need for algorithms that infer, understand, and utilize information about the position and orientation of sensor platforms when observing and/or interacting with their environment.
The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations is described, called the mirrored normal-Bingham distribution. This distribution generalizes both the normal distribution in Euclidean space, and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving the analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.
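For orientation, the standard Bingham distribution that the mirrored normal-Bingham generalizes has the following unnormalized density on the unit hypersphere; this sketch shows only that classical building block, not the new distribution introduced in the thesis, and the function name is illustrative.

```python
import numpy as np

def bingham_log_density_unnorm(x, M, Z):
    """Unnormalized log-density of a Bingham distribution on the unit
    hypersphere: log p(x) = x^T M diag(Z) M^T x + const. Note the
    antipodal symmetry p(x) = p(-x), natural for axes and for rotations
    represented by unit quaternions."""
    x = np.asarray(x, dtype=float)
    x = x / np.linalg.norm(x)          # ensure x lies on the sphere
    return float(x @ M @ np.diag(Z) @ M.T @ x)
```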
Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.
Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.
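For contrast with the feature-space approach described above, the classical inverse perspective mapping applied to images can be sketched with off-the-shelf OpenCV calls; the corner coordinates and filename below are placeholders, since real use derives them from a calibrated camera pose.

```python
import cv2
import numpy as np

# Four image points of a ground-plane region and their bird's-eye targets
# (placeholder values; a real system derives these from camera pose).
src = np.float32([[420, 300], [860, 300], [1280, 720], [0, 720]])
dst = np.float32([[0, 0], [640, 0], [640, 720], [0, 720]])

H = cv2.getPerspectiveTransform(src, dst)        # 3x3 homography
img = cv2.imread("frame.png")                    # placeholder input frame
if img is not None:
    birdseye = cv2.warpPerspective(img, H, (640, 720))
```

Warping every frame this way is precisely the per-image cost that the feature-space methods above avoid, which is where their orders-of-magnitude speed advantage comes from.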
The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.
Abstract:
Sub-ice shelf circulation and freezing/melting rates in ocean general circulation models depend critically on an accurate and consistent representation of cavity geometry. Existing global or pan-Antarctic data sets have turned out to contain various inconsistencies and inaccuracies. The goal of this work is to compile independent regional fields into a global data set. We use the S-2004 global 1-minute bathymetry as the backbone and add an improved version of the BEDMAP topography for an area that roughly coincides with the Antarctic continental shelf. Locations of the merging line have been carefully adjusted in order to get the best out of each data set. High-resolution gridded data for upper and lower ice surface topography and cavity geometry of the Amery, Fimbul, Filchner-Ronne, Larsen C and George VI Ice Shelves, and for Pine Island Glacier have been carefully merged into the ambient ice and ocean topographies. Multibeam survey data for bathymetry in the former Larsen B cavity and the southeastern Bellingshausen Sea have been obtained from the data centers of Alfred Wegener Institute (AWI), British Antarctic Survey (BAS) and Lamont-Doherty Earth Observatory (LDEO), gridded, and again carefully merged into the existing bathymetry map. The global 1-minute dataset (RTopo-1 Version 1.0.5) has been split into two netCDF files. The first contains digital maps for global bedrock topography, ice bottom topography, and surface elevation. The second contains the auxiliary maps for data sources and the surface type mask. A regional subset that covers all variables for the region south of 50 deg S is also available in netCDF format. Datasets for the locations of grounding and coast lines are provided in ASCII format.
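As a usage sketch, the netCDF files can be read with standard tools such as xarray; the file name, variable names, and coordinate conventions below are hypothetical, so consult the RTopo-1 documentation for the actual ones.

```python
import xarray as xr

ds = xr.open_dataset("RTopo-1.0.5_global.nc")    # hypothetical file name
print(ds.data_vars)   # expect bedrock topography, ice bottom, surface elevation

# Regional subset analogous to the provided south-of-50-deg-S product
# (assumes a coordinate named "lat" stored in ascending order).
south = ds.sel(lat=slice(-90.0, -50.0))
```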
Abstract:
Coccolithophores are a key phytoplankton group that exhibits remarkable diversity in its biology, ecology, and calcitic exoskeletons (coccospheres). An understanding of the physiological processes that underpin coccosphere architecture is essential for maximizing the information that can be retrieved from their extensive fossil record. Using culturing experiments on four modern species from three long-lived families, we investigate how coccosphere architecture responds to population shifts from rapid (exponential) to slowed (stationary) growth phases as nutrients become depleted. These experiments reveal statistical differences in cell size and the number of coccoliths per cell between these two growth phases, specifically that cells in exponential-phase growth are typically smaller with fewer coccoliths, whereas cells experiencing growth-limiting nutrient depletion have larger coccosphere sizes and greater numbers of coccoliths per cell. Although the exact numbers are species-specific, these growth-phase shifts in coccosphere geometry are common to four different coccolithophore families (Calcidiscaceae, Coccolithaceae, Isochrysidaceae, Helicosphaeraceae), demonstrating that this is a core physiological response to nutrient depletion across a representative diversity of this phytoplankton group. Polarised light microscopy was used for all coccosphere geometry measurements.
Abstract:
A recently developed biomass fuel pellet, the Q’ Pellet, offers significant improvements over conventional white pellets, with characteristics comparable to those of coal. The Q’ Pellet was initially created at bench scale using a proprietary die and punch design, in which the biomass was torrefied in situ and then compressed. To bring the benefits of the Q’ Pellet to a commercial level, it must be capable of being produced in a continuous process at a competitive cost. A prototype machine was previously constructed in a first effort to assess continuous processing of the Q’ Pellet. The prototype torrefied biomass in a separate, ex-situ reactor and transported it into a rotary compression stage. Upon evaluation, parts of the prototype were found to be unsuccessful, requiring a redesign of the material transport method as well as the compression mechanism. A process was developed in which material was torrefied ex situ and extruded in a pre-compression stage. The extruded biomass overcame multiple handling issues that had been experienced with un-densified biomass, facilitating efficient material transport. Biomass was extruded directly into a novel redesigned pelletizing die, which incorporated a removable cap, an ejection pin, and a die spring to accommodate a repeatable continuous process. Although after several uses the die required manual intervention due to minor design and manufacturing quality limitations, the system clearly demonstrated the capability of producing the Q’ Pellet in a continuous process. Q’ Pellets produced by the pre-compression method and pelletized in the redesigned die had an average dry-basis gross calorific value of 22.04 MJ/kg and a pellet durability index of 99.86%, and dried to within 6.2% of their initial mass following 24 hours submerged in water. This compares well with literature results of 21.29 MJ/kg, 100% pellet durability index, and <5% mass increase in a water submersion test. These results indicate that the methods developed herein are capable of producing Q’ Pellets in a continuous process with fuel properties competitive with coal.
Abstract:
Let $M$ be a compact, oriented, even-dimensional Riemannian manifold and let $S$ be a Clifford bundle over $M$ with Dirac operator $D$. Then \[ \textsc{Atiyah--Singer: } \quad \text{Ind}\, D = \int_M \hat{\mathcal{A}}(TM)\wedge \text{ch}(\mathcal{V}) \] where $\mathcal{V} =\text{Hom}_{\mathbb{C}l(TM)}(\slashed{\mathsf{S}},S)$. We prove the above statement by means of the heat kernel of the heat semigroup $e^{-tD^2}$. The first key result is the McKean-Singer theorem, which describes the index in terms of the supertrace of the heat kernel. The trace of the heat kernel is obtained from local geometric information. Moreover, using the asymptotic expansion of the kernel, we see that only one term matters in the computation of the index. The Berezin formula tells us that the supertrace is nothing but the coefficient of the Clifford top part, and finally, Getzler calculus enables us to find the integral of these top parts in terms of characteristic classes.
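For reference, the McKean-Singer identity invoked above can be stated in this notation as \[ \text{Ind}\, D = \operatorname{Str}\left(e^{-tD^2}\right) = \int_M \operatorname{str}\, k_t(x,x)\, dx \quad \text{for every } t>0, \] where $k_t$ is the heat kernel of $e^{-tD^2}$ and $\operatorname{str}$ denotes the pointwise supertrace. Because the left-hand side is independent of $t$, the small-$t$ asymptotic expansion of $k_t(x,x)$ can be used to isolate the single term that contributes to the index.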
Abstract:
Electrospun nanofibers are a promising material for ligamentous tissue engineering; however, the weak mechanical properties of fibers to date have limited their clinical usage. The goal of this work was to modify electrospun nanofibers to create a robust structure that mimics the complex hierarchy of native tendons and ligaments. The scaffolds fabricated in this study consisted of either random or aligned nanofibers in flat sheets or rolled nanofiber bundles that mimic the size scale of fascicle units in primarily tensile load-bearing soft musculoskeletal tissues. Altering nanofiber orientation and geometry significantly affected mechanical properties; most notably, aligned nanofiber sheets had the greatest modulus: 125% higher than that of random nanofiber sheets and 45% higher than that of aligned nanofiber bundles. Modifying aligned nanofiber sheets to form aligned nanofiber bundles also resulted in approximately 107% higher yield stresses and 140% higher yield strains. The mechanical properties of aligned nanofiber bundles were in the range of those of the native ACL: modulus = 158 ± 32 MPa, yield stress = 57 ± 23 MPa, and yield strain = 0.38 ± 0.08. Adipose-derived stem cells cultured on all surfaces remained viable and proliferated extensively over a 7-day culture period, and cells elongated on nanofiber bundles. The results of the study suggest that aligned nanofiber bundles may be useful for ligament and tendon tissue engineering based on their mechanical properties and ability to support cell adhesion, proliferation, and elongation.
Abstract:
Recent proxy measurements reveal that subglacial lakes beneath modern ice sheets periodically store and release large volumes of water, providing an important but poorly understood influence on contemporary ice dynamics and mass balance. This is because direct observations of how lake drainage initiates and proceeds are lacking. Here we present physical evidence of the mechanism and geometry of lake drainage from the discovery of relict subglacial lakes formed during the last glaciation in Canada. These palaeo-subglacial lakes comprised shallow (<10 m) lenses of water perched behind ridges orientated transverse to ice flow. We show that lakes periodically drained through channels incised into bed substrate (canals). Canals sometimes trend into eskers that represent the depositional imprint of the last high-magnitude lake outburst. The subglacial lakes and channels are preserved on top of glacial lineations, indicating long-term re-organization of the subglacial drainage system and coupling to ice flow.
Abstract:
During the epoch when the first collapsed structures formed (6<z<50), our Universe went through an extended period of changes. Some of the radiation from the first stars and accreting black holes in those structures escaped and changed the state of the Intergalactic Medium (IGM). The era of this global phase change, in which the state of the IGM was transformed from cold and neutral to warm and ionized, is called the Epoch of Reionization. In this thesis we focus on numerical methods to calculate the effects of this escaping radiation. We start by considering the performance of the cosmological radiative transfer code C2-Ray. We find that although this code efficiently and accurately solves for the changes in the ionized fractions, it can yield inaccurate results for the temperature changes. We introduce two new elements to improve the code. The first element, an adaptive time step algorithm, quickly determines an optimal time step by only considering the computational cells relevant for this determination. The second element, asynchronous evolution, allows different cells to evolve with different time steps. An important constituent of methods to calculate the effects of ionizing radiation is the transport of photons through the computational domain, or "ray tracing". We devise a novel ray-tracing method called PYRAMID, which uses a new geometry: the pyramidal geometry. This geometry shares properties with both the standard Cartesian and spherical geometries, making it on the one hand easy to use in conjunction with a Cartesian grid and on the other hand ideally suited to tracing radiation from a radially emitting source. A time-dependent photoionization calculation not only requires tracing the path of photons but also solving the coupled set of photoionization and thermal equations. Several different solvers for these equations are in use in cosmological radiative transfer codes. We conduct a detailed and quantitative comparison of four different standard solvers, in which we evaluate how their accuracy depends on the choice of the time step. This comparison shows that their performance can be characterized by two simple parameters and that C2-Ray generally performs best.
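As a minimal illustration of the kind of equation these solvers integrate, consider the simplified rate equation for the ionized fraction x, dx/dt = (1 - x)*Gamma - x*ne*alpha (photoionization minus recombination). The explicit-Euler step below is an editorial sketch whose accuracy depends strongly on the time step, which is exactly the sensitivity the comparison quantifies; the schemes actually compared in the thesis are more sophisticated.

```python
def ionized_fraction_step(x, gamma, ne, alpha, dt):
    """One explicit-Euler update of the ionized fraction x for
    dx/dt = (1 - x)*gamma - x*ne*alpha, where gamma is the
    photoionization rate, ne the electron density, and alpha the
    recombination coefficient. Illustrative solver only."""
    return x + dt * ((1.0 - x) * gamma - x * ne * alpha)
```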
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Quenched and tempered high-speed steels obtained by powder metallurgy are commonly used in automotive components, such as valve seats of combustion engines. In order to machine these components, tools with high wear resistance and appropriate cutting edge geometry are required. This work investigates the influence of the edge preparation of polycrystalline cubic boron nitride (PCBN) tools on the wear behavior in the orthogonal longitudinal turning of quenched and tempered M2 high-speed steels obtained by powder metallurgy. For this research, PCBN tools with high and low CBN content were used. Two different cutting edge geometries with a honed radius were tested: with a ground land (S shape) and without it (E shape). The cutting speed was also varied from 100 to 220 m/min, and a rigid CNC lathe was used. The results showed that the high-CBN, E-shaped tool presented the longest life at a cutting speed of 100 m/min. High-CBN tools with a ground land and honed edge radius (S-shaped) showed edge damage and shorter tool life. Low-CBN, S-shaped tools showed similar results, but with inferior performance compared with high-CBN tools in both forms of edge preparation.
Abstract:
This thesis proposes the development of deployable mechanisms for space applications, together with actuation schemes enabling both their deployment and the control of the orientation in orbit of the spacecraft carrying them. Since the objective is to deploy large surfaces for solar panels, telecommunication dishes, or space station sections, a simple planar triangular geometry is adopted so that units can be assembled into different types of surfaces. The rigid-link configurations proposed in the literature for the deployment of symmetric solids are optimized and adapted to the expansion of an open geometry, such as a dish. The optimization achieves a planar expansion ratio of more than 5 for a single unit, but exhibits instabilities when a prototype is actuated. The principle of transmitting motion from one stage of the mechanism to the next is revised to reduce the sensitivity of the mechanism's performance to the geometry of its internal links. The new design, based on timing belts, achieves planar expansion ratios above 20 in some configurations. The effect of the main geometric design factors is studied to obtain a simple optimization relation for the planar mechanism, allowing it to be adapted to different application contexts. The identical geometry of the triangular faces of each deployed surface also allows these faces to be stacked, increasing the compactness of the mechanism. A specialized joint is designed to allow the faces to be unfolded and then deployed in succession. Deploying large surfaces strongly affects the orientation, and potentially the trajectory, of the spacecraft, so several novel attitude-control strategies are proposed. To take advantage of a large surface, actuation by point masses on the periphery of the mechanism is presented; its dynamic equations are derived and simulated to assess its performance. The results demonstrate the potential of this reorientation strategy, which leaves the central space of the base satellite unobstructed, but its performance remains below that of a reaction wheel of equivalent mass. A redundant actuation strategy using reaction wheels is then presented for mechanisms of various levels of complexity in which all joints are passive, that is, unactuated. A planar four-bar mechanism is simulated in closed loop with a simple controller to validate the control of a common scissor mechanism. These results are extended to the derivation of the dynamic equations of a spherical four-bar mechanism, demonstrating the potential of reaction-wheel actuation for controlling both the configuration and the spatial orientation of such a mechanism. A two-body prototype, each body carrying a reaction wheel and connected by a single passive joint, is built and controlled using camera-based tracking of the modules. The test bench is described, along with the design challenges posed by the elimination of external forces. The results show that the system is controllable in both orientation and configuration. The thesis concludes with a case study applying the main systems developed in this research.
The collection of small and medium-sized orbital debris is presented as a problem that has not yet found an adequate solution and that poses a real danger to future space missions. The belt-driven triangular deployable unit is duplicated to form a dish several hundred meters in diameter, proposed as a solution for capturing and slowing these categories of debris. The parameters of a mission for this purpose are detailed, as well as the reorientation capability that the reaction wheels provide in addition to controlling the deployment. Nearly 2,000 debris objects could be removed in less than a year in low Earth orbit at an altitude of 819 km.
Abstract:
Following the development of non-Euclidean geometries from the mid-nineteenth century onwards, Euclid’s system came to be re-conceived as a language for describing reality rather than a set of transcendental laws. As Henri Poincaré famously put it, ‘[i]f several geometries are possible, is it certain that our geometry [...] is true?’. By examining Joyce’s linguistic play and conceptual engagement with ground-breaking geometric constructs in Ulysses and Finnegans Wake, this thesis explores how his topographical writing of place encapsulates a common crisis between geometric and linguistic modes of representation within the context of modernity. More specifically, it investigates how Joyce presents Euclidean geometry and its topographical applications as languages, rather than ideally objective systems, for describing visual reality; and how, conversely, he employs language figuratively to emulate the systems by which the world is commonly visualised. With reference to his early readings of Giordano Bruno, Henri Poincaré, and other critics of the Euclidean tradition, it investigates how Joyce’s obsession with measuring and mapping space throughout his works enters into his more developed reflections on the codification of visual signs in Finnegans Wake. In particular, this thesis sheds new light on Joyce’s developing fascination with the ‘geometry of language’ practised by Bruno, whose influence on Joyce is often assumed in Joyce studies yet rarely explored in any great detail.