906 results for Multi-dimensional scaling
Abstract:
With the introduction of new input devices, such as multi-touch surface displays, the Nintendo WiiMote, the Microsoft Kinect, and the Leap Motion sensor, among others, the field of Human-Computer Interaction (HCI) finds itself at an important crossroads that requires solving new challenges. Given the amount of three-dimensional (3D) data available today, 3D navigation plays an important role in 3D User Interfaces (3DUI). This dissertation deals with multi-touch, 3D navigation, and how users can explore 3D virtual worlds using a multi-touch, non-stereo, desktop display. The contributions of this dissertation include a feature-extraction algorithm for multi-touch displays (FETOUCH), a multi-touch and gyroscope interaction technique (GyroTouch), a theoretical model for multi-touch interaction using high-level Petri Nets (PeNTa), an algorithm to resolve ambiguities in the multi-touch gesture classification process (Yield), a proposed technique for navigational experiments (FaNS), a proposed gesture (Hold-and-Roll), and an experiment prototype for 3D navigation (3DNav). The verification experiment for 3DNav was conducted with 30 human subjects of both genders. The experiment used the 3DNav prototype to present a pseudo-universe, where each user was required to find five objects using the multi-touch display and five objects using a game controller (GamePad). For the multi-touch display, 3DNav used a commercial library called GestureWorks in conjunction with Yield to resolve the ambiguity posed by the multiplicity of gestures reported by the initial classification. The experiment compared both devices. The task completion time with multi-touch was slightly shorter, but the difference was not statistically significant. The experimental design also included an equation that determined each subject's level of video game console expertise, which was used to divide the subjects into two groups: casual users and experienced users.
The study found that experienced gamers performed significantly faster with the GamePad than casual users. When looking at the groups separately, casual gamers performed significantly better using the multi-touch display, compared to the GamePad. Additional results are found in this dissertation.
Abstract:
The gravitationally confined detonation (GCD) model has been proposed as a possible explosion mechanism for Type Ia supernovae in the single-degenerate evolution channel. It starts with the ignition of a deflagration in a single off-centre bubble in a near-Chandrasekhar-mass white dwarf. Driven by buoyancy, the deflagration flame rises in a narrow cone towards the surface. For the most part, the main component of the flow of the expanding ashes remains radial, but upon reaching the outer, low-pressure layers of the white dwarf, an additional lateral component develops. This causes the deflagration ashes to converge again at the opposite side, where the compression heats fuel and a detonation may be launched. We first performed five three-dimensional hydrodynamic simulations of the deflagration phase in 1.4 M⊙ carbon/oxygen white dwarfs at intermediate resolution (256³ computational zones). We confirm that the closer to the centre the initial deflagration is ignited, the slower the buoyant rise and the longer the deflagration ashes take to break out and close in on the opposite pole to collide. To test the GCD explosion model, we then performed a high-resolution (512³ computational zones) simulation for a model with an ignition spot offset near the upper limit of what is still justifiable, 200 km. This high-resolution simulation met our deliberately optimistic detonation criteria, and we initiated a detonation. The detonation burned through the white dwarf and led to its complete disruption. For this model, we determined detailed nucleosynthetic yields by post-processing 10⁶ tracer particles with a 384-nuclide reaction network, and we present multi-band light curves and time-dependent optical spectra. We find that our synthetic observables show a prominent viewing-angle sensitivity in ultraviolet and blue wavelength bands, which contradicts observed SNe Ia.
The strong dependence on the viewing angle is caused by the asymmetric distribution of the deflagration ashes in the outer ejecta layers. Finally, we compared our model to SN 1991T. The overall flux level of the model is slightly too low, and the model predicts pre-maximum-light spectral features due to Ca, S, and Si that are too strong. Furthermore, the model's chemical abundance stratification qualitatively disagrees with recent abundance tomography results in two key areas: our model lacks low-velocity stable Fe and instead has copious amounts of high-velocity ⁵⁶Ni and stable Fe. We therefore do not find good agreement of the model with SN 1991T.
Abstract:
Person re-identification involves recognizing a person across non-overlapping camera views, with different pose, illumination, and camera characteristics. We propose to tackle this problem by training a deep convolutional network to represent a person's appearance as a low-dimensional feature vector that is invariant to the common appearance variations encountered in the re-identification problem. Specifically, a Siamese network architecture is used to train a feature extraction network using pairs of similar and dissimilar images. We show that the use of a novel multi-task learning objective is crucial for regularizing the network parameters in order to prevent over-fitting due to the small size of the training dataset. We complement the verification task, which is at the heart of re-identification, by training the network to jointly perform verification and identification, and to recognise attributes related to the clothing and pose of the person in each image. Additionally, we show that our proposed approach performs well even in the challenging cross-dataset scenario, which may better reflect real-world expected performance.
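The abstract does not spell out the pairwise training term; a common choice for Siamese verification training is a margin-based contrastive loss, sketched below in plain NumPy. The function name and the margin value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def contrastive_loss(f1, f2, same, margin=1.0):
    """Margin-based verification loss for one pair of embeddings.

    f1, f2 : feature vectors produced by the shared (Siamese) network
    same   : 1 if the two images show the same person, else 0
    """
    d = np.linalg.norm(f1 - f2)
    if same:
        return 0.5 * d ** 2                      # pull matching pairs together
    return 0.5 * max(0.0, margin - d) ** 2       # push mismatches beyond the margin
```

In the multi-task setting described above, such a verification term would be combined with weighted identification and attribute-classification losses into a single training objective.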
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
This paper presents a three-dimensional, thermo-mechanical modelling approach to the cooling and solidification phases associated with the shape casting of metals, i.e. die, sand, and investment casting. Novel vertex-based Finite Volume (FV) methods are described and employed with regard to the small-strain, non-linear Computational Solid Mechanics (CSM) capabilities required to model shape casting. The CSM capabilities include the non-linear material phenomena of creep and thermo-elasto-visco-plasticity at high temperatures and thermo-elasto-plasticity at low temperatures, as well as the multi-body deformable contact that can occur between the metal casting and the mould. The vertex-based FV methods, which can be readily applied to unstructured meshes, are included within a comprehensive FV modelling framework, PHYSICA. The additional heat transfer (by conduction and convection), filling, porosity, and solidification algorithms existing within PHYSICA for the complete modelling of all shape casting processes employ cell-centred FV methods. The thermo-mechanical coupling is performed in a staggered incremental fashion, which addresses the possible gap formation between the component and the mould, and is ultimately validated against a variety of shape casting benchmarks.
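The staggered incremental coupling amounts to alternating solver calls within each time increment: a thermal solve on the current geometry, a mechanical solve driven by the updated temperatures, and a gap-formation update that feeds back into the interface heat transfer. A minimal Python sketch of one such increment; the solver callbacks are hypothetical placeholders, not PHYSICA's API:

```python
def staggered_increment(T, u, dt, thermal_step, mech_step, gap_model):
    """One increment of a staggered thermo-mechanical coupling.

    T, u are the current temperature and displacement fields;
    thermal_step, mech_step, and gap_model are hypothetical callbacks
    standing in for the FV thermal solve, the CSM solve, and the
    casting/mould gap-formation check."""
    T_new = thermal_step(T, u, dt)   # heat conduction/convection on current geometry
    u_new = mech_step(u, T_new, dt)  # stresses/strains from the new temperatures
    gap = gap_model(u_new)           # gap size alters interface heat transfer next step
    return T_new, u_new, gap
```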
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
The difficulties encountered in implementing large-scale computational mechanics (CM) codes on multiprocessor systems are now fairly well understood. Despite the claims of shared-memory architecture manufacturers to provide effective parallelising compilers, these have not proved adequate for large or complex programs. Significant programmer effort is usually required to achieve reasonable parallel efficiencies on significant numbers of processors. The paradigm of Single Program Multiple Data (SPMD) domain decomposition with message passing, where each processor runs the same code on a subdomain of the problem, communicating through the exchange of messages, has for some time been demonstrated to provide the required level of efficiency, scalability, and portability across both shared- and distributed-memory systems, without the need to re-author the code into a new language or to support differing message passing implementations. Extension of the methods into three dimensions has been enabled through the engineering of PHYSICA, a framework for supporting 3D, unstructured-mesh, continuum mechanics modelling. In PHYSICA, six inspectors are used. Part of the challenge in automating parallelisation is being able to prove the equivalence of inspectors so that they can be merged into as few as possible.
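The SPMD pattern can be illustrated with a toy 1D Jacobi smoother: the global mesh is split into subdomains with one-cell halos, and every update is preceded by a halo exchange. The sketch below emulates the message passing serially in Python (a real code would use MPI sends and receives); the function names are illustrative only.

```python
import numpy as np

def exchange_halos(subs):
    """Emulated message-passing step: copy each neighbour's boundary
    cell into the one-cell halo regions (subs[i][0] and subs[i][-1])."""
    edges = [(s[1], s[-2]) for s in subs]  # snapshot boundary values before writing
    for i, s in enumerate(subs):
        s[0] = edges[i - 1][1] if i > 0 else s[1]                 # left halo
        s[-1] = edges[i + 1][0] if i < len(subs) - 1 else s[-2]   # right halo

def jacobi_step(subs):
    """One SPMD iteration: exchange halos, then update interior cells."""
    exchange_halos(subs)
    for s in subs:
        s[1:-1] = 0.5 * (s[:-2] + s[2:])
```

Iterating this on the decomposed arrays reproduces, cell for cell, the result of running the same smoother on the undivided array, which is the essential correctness property of the decomposition.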
Abstract:
As the complexity of parallel applications increases, the performance limitations resulting from computational load imbalance become dominant. Mapping the problem space to the processors in a parallel machine in a manner that balances the workload of each processor will typically reduce the run-time. In many cases the computation time required for a given calculation cannot be predetermined, even at run-time, and so static partitioning of the problem yields poor performance. For problems in which the computational load across the discretisation is dynamic and inhomogeneous, for example multi-physics problems involving fluid and solid mechanics with phase changes, the workload for a static subdomain will change over the course of a computation and cannot be estimated beforehand. For such applications the mapping of loads to processors must change dynamically at run-time in order to maintain reasonable efficiency. The issues of dynamic load balancing are examined in the context of PHYSICA, a three-dimensional, unstructured-mesh, multi-physics continuum mechanics modelling code.
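The core of the mapping problem: given per-cell workload weights measured at run-time, reassign cells so that the maximum processor load approaches the average. The greedy longest-processing-time sketch below is a generic illustration of that rebalancing step, not the algorithm used in PHYSICA, and it ignores the data-migration and mesh-connectivity costs a real balancer must weigh.

```python
import heapq

def balance(weights, nprocs):
    """Greedy LPT: assign the heaviest remaining cell to the currently
    least-loaded processor. Returns a cell -> processor mapping."""
    heap = [(0.0, p) for p in range(nprocs)]  # (load, processor)
    heapq.heapify(heap)
    assign = [0] * len(weights)
    for i in sorted(range(len(weights)), key=lambda j: -weights[j]):
        load, p = heapq.heappop(heap)
        assign[i] = p
        heapq.heappush(heap, (load + weights[i], p))
    return assign

def imbalance(weights, assign, nprocs):
    """Maximum processor load divided by the ideal (average) load."""
    loads = [0.0] * nprocs
    for i, p in enumerate(assign):
        loads[p] += weights[i]
    return max(loads) / (sum(loads) / nprocs)
```

A dynamic scheme would re-run such a mapping whenever the measured imbalance drifts past a threshold, trading the cost of migration against the efficiency recovered.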
Abstract:
In this paper we consider instabilities of localised solutions in planar neural field firing rate models of Wilson-Cowan or Amari type. Importantly, we show that angular perturbations can destabilise spatially localised solutions. For a scalar model with a Heaviside firing rate function we calculate symmetric one-bump and ring solutions explicitly and use an Evans function approach to predict the point of instability and the shapes of the dominant growing modes. Our predictions are shown to be in excellent agreement with direct numerical simulations. Moreover, beyond the instability our simulations demonstrate the emergence of multi-bump and labyrinthine patterns. With the addition of spike-frequency adaptation, numerical simulations of the resulting vector model show that it is possible for structures without rotational symmetry, and in particular multi-bumps, to undergo an instability to a rotating wave. We use a general argument, valid for smooth firing rate functions, to establish the conditions necessary to generate such a rotational instability. Numerical continuation of the rotating wave is used to quantify the emergent angular velocity as a bifurcation parameter is varied. Wave stability is found via the numerical evaluation of an associated eigenvalue problem.
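For the Heaviside firing rate, the width a of a one-bump equilibrium is fixed by the classical Amari condition W(a) = ∫₀ᵃ w(x) dx = h, where h is the firing threshold. The sketch below solves this condition by bisection for an illustrative difference-of-exponentials kernel; the kernel parameters are arbitrary demo choices, not values from the paper.

```python
import math

# Illustrative lateral-inhibition kernel w(x) = K e^{-k|x|} - M e^{-m|x|}
K, k, M, m = 3.0, 3.0, 1.5, 1.5

def W(a):
    """Integral of the kernel over [0, a]."""
    return K / k * (1 - math.exp(-k * a)) - M / m * (1 - math.exp(-m * a))

def bump_width(h, lo, hi, iters=200):
    """Bisect W(a) = h on [lo, hi], where W is decreasing (wide-bump branch)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if W(mid) > h:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For this kernel and h = 0.1 the wide bump has width a ≈ 1.455; solutions on the narrow branch, where W is increasing, are the unstable ones in Amari's analysis.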
Abstract:
As the semiconductor industry struggles to maintain its momentum down the path following Moore's Law, three-dimensional integrated circuit (3D IC) technology has emerged as a promising solution to achieve higher integration density, better performance, and lower power consumption. However, despite its significant improvement in electrical performance, 3D IC presents several serious physical design challenges. In this dissertation, we investigate physical design methodologies for 3D ICs with a primary focus on two areas: low-power 3D clock tree design, and reliability degradation modeling and management. Clock trees are essential parts of digital systems and dissipate a large amount of power due to their high capacitive loads. The majority of existing 3D clock tree designs focus on minimizing the total wire length, which produces sub-optimal results for power optimization. In this dissertation, we formulate a 3D clock tree design flow which directly optimizes for clock power. We also investigate a design methodology for clock gating a 3D clock tree, which uses shutdown gates to selectively turn off unnecessary clock activities. Unlike the common assumption in 2D ICs that shutdown gates are cheap and thus can be applied at every clock node, shutdown gates in 3D ICs introduce additional control TSVs, which compete with clock TSVs for placement resources. We explore design methodologies that produce the optimal allocation and placement of clock and control TSVs so that the clock power is minimized. We show that the proposed synthesis flow saves significant clock power while accounting for the available TSV placement area. Vertical integration also brings new reliability challenges, including TSV electromigration (EM) and several other reliability loss mechanisms caused by TSV-induced stress. These reliability loss models involve complex inter-dependencies between electrical and thermal conditions, which have not been investigated in the past.
In this dissertation we set up an electrical/thermal/reliability co-simulation framework to capture the transient behavior of reliability loss in 3D ICs. We further derive and validate an analytical reliability objective function that can be integrated into the 3D placement design flow. The reliability-aware placement scheme enables co-design and co-optimization of both the electrical and reliability properties, thus improving both the circuit's performance and its lifetime. Our electrical/reliability co-design scheme avoids unnecessary design cycles or the application of ad-hoc fixes that lead to sub-optimal performance. Vertical integration also enables stacking DRAM on top of CPUs, providing high bandwidth and short latency. However, non-uniform voltage fluctuation and local thermal hotspots in the CPU layers are coupled into the DRAM layers, causing a non-uniform bit-cell leakage (and thereby bit-flip) distribution. We propose a performance-power-resilience simulation framework to capture DRAM soft errors in 3D multi-core CPU systems. In addition, a dynamic resilience management (DRM) scheme is investigated, which adaptively tunes the CPU's operating points to adjust the DRAM's voltage noise and thermal condition during runtime. The DRM uses dynamic frequency scaling to achieve a resilience borrow-in strategy, which effectively enhances DRAM resilience without sacrificing performance. The proposed physical design methodologies should act as important building blocks for 3D ICs and push 3D ICs toward mainstream acceptance in the near future.
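The clock-gating trade-off can be quantified with the standard dynamic-power relation P = α·C·V²·f: gating lowers the activity factor α of a subtree but adds always-switching gate and control-TSV capacitance. The back-of-the-envelope model below is only an illustration of that trade-off (all parameter values are made up), not the dissertation's synthesis flow.

```python
def clock_power_mw(cap_pf, vdd, freq_ghz, activity=1.0):
    """Dynamic switching power in mW: P = activity * C * V^2 * f."""
    return activity * (cap_pf * 1e-12) * vdd ** 2 * (freq_ghz * 1e9) * 1e3

def gated_tree_power_mw(subtrees, gate_cap_pf, vdd, freq_ghz):
    """Power of a gated clock tree. Each subtree is (cap_pf, duty), where
    duty is the fraction of cycles the subtree stays enabled; each gate
    adds gate_cap_pf of always-switching overhead capacitance."""
    total = 0.0
    for cap_pf, duty in subtrees:
        total += clock_power_mw(gate_cap_pf, vdd, freq_ghz)        # gate/control overhead
        total += clock_power_mw(cap_pf, vdd, freq_ghz, activity=duty)
    return total
```

Gating pays off only when the activity saved on a subtree outweighs the overhead capacitance, which is why placement resources for control TSVs enter the optimization.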
Abstract:
Cutting and packing problems are a family of combinatorial optimization problems that have been widely studied in numerous areas of industry and research, owing to their relevance in an enormous variety of real-world applications. They arise in many production industries where an available material or space must be subdivided into smaller parts. A great variety of methods exist for solving this type of optimization problem. When proposing a solution method for an optimization problem, it is advisable to take into account the approach and the requirements regarding the problem and its solution. Exact approaches find the optimal solution, but they are only viable for very small problem instances. Heuristics exploit problem-specific knowledge to obtain high-quality solutions without excessive computational effort. Metaheuristics go a step further, since they can solve a very general class of computational problems. Finally, hyperheuristics attempt to automate, usually by incorporating learning techniques, the process of selecting, combining, generating, or adapting simpler heuristics to solve optimization problems efficiently. Getting the most out of these methods requires knowing, besides the type of optimization (single- or multi-objective) and the size of the problem, the computational resources available, since the use of parallel machines and implementations can considerably reduce the time needed to obtain a solution. In real-world industrial applications of cutting and packing problems, the difference between using a quickly obtained solution and using more sophisticated proposals to find the optimal solution can determine the survival of the company.
However, the development of more sophisticated and effective proposals usually involves a great computational effort, which in real applications can slow down the production process. Therefore, designing proposals that are both effective and efficient is essential. For this reason, the main objective of this work is the design and implementation of effective and efficient methods for solving different cutting and packing problems. Moreover, if these methods are defined as schemes that are as general as possible, they can be applied to different cutting and packing problems without many changes to adapt them to each one. Thus, taking into account the wide range of methodologies for solving optimization problems and the techniques available to increase their efficiency, several methods have been designed and implemented to solve various cutting and packing problems, aiming to improve on the existing proposals in the literature. The problems addressed are: the Two-Dimensional Cutting Stock Problem, the Two-Dimensional Strip Packing Problem, and the Container Loading Problem. For each of these problems an extensive and thorough literature review was carried out, and the chosen variants were solved by applying different solution methods: single-objective exact methods and their parallelizations, and multi-objective approximate methods and their parallelizations. The single-objective exact methods are based on tree-search techniques. As multi-objective approximate methods, multi-objective metaheuristics (MOEAs) were selected. Furthermore, to represent the individuals used by these methods, direct encodings based on a postfix notation were employed, as well as encodings that use placement heuristics and hyperheuristics.
Some of these methodologies have been improved using parallel schemes built with the OpenMP and MPI programming tools.
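Of the problems listed, the Two-Dimensional Strip Packing Problem has a classic constructive placement heuristic, First-Fit Decreasing Height (FFDH), representative of the kind of heuristic that such encodings and hyperheuristics build on. The sketch below is a generic textbook version, not the thesis's implementation.

```python
def ffdh(rects, strip_width):
    """First-Fit Decreasing Height for 2D strip packing.

    rects: list of (width, height) pieces. Pieces are sorted by
    decreasing height; each piece goes on the first level (horizontal
    shelf) with enough residual width, else it opens a new level.
    Returns the total strip height used."""
    levels = []  # (remaining_width, level_height)
    for w, h in sorted(rects, key=lambda r: -r[1]):
        for i, (rem, lh) in enumerate(levels):
            if w <= rem:
                levels[i] = (rem - w, lh)  # place on existing level
                break
        else:
            levels.append((strip_width - w, h))  # open a new level
    return sum(lh for _, lh in levels)
```

Level-based heuristics like this are fast and come with worst-case guarantees within a constant factor of the optimal height, which makes them attractive building blocks inside metaheuristic and hyperheuristic搜索 schemes.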
Abstract:
One-dimensional nanostructures have introduced new possibilities for materials applications owing to their superior properties compared to bulk materials. The properties of nanostructures have been characterized by many techniques and used for various device applications. However, the simultaneous correlation between the physical and structural properties of these nanomaterials has not been widely investigated. Therefore, it is necessary to perform in-situ studies of the physical and structural properties of nanomaterials to understand their relation. In this work, we use a unique instrument to perform real-time atomic force microscopy (AFM) and scanning tunneling microscopy (STM) of nanomaterials inside a transmission electron microscopy (TEM) system. This AFM/STM-TEM system is used to investigate the mechanical, electrical, and electrochemical properties of boron nitride nanotubes (BNNTs) and silicon nanorods (SiNRs). BNNTs are one of the subjects of this PhD research due to their comparable, and in some cases superior, properties relative to carbon nanotubes. To further develop their applications, these characteristics must be investigated at the atomic level. In this research, the mechanical properties of multi-walled BNNTs were studied first. Several tests were designed to study and characterize their real-time deformation behavior under applied force. Observations revealed that BNNTs possess highly flexible structures under applied force. Detailed studies were then conducted to understand the bending mechanism of the BNNTs. The formation of reversible ripples was observed and described in terms of the thermodynamic energy of the system. Fracture failure of BNNTs was initiated at the outermost walls and characterized as brittle. Second, the electrical properties of individual BNNTs were studied. Results showed that the bandgap and electronic properties of BNNTs can be engineered by means of applied strain.
It was found that the conductivity, electron concentration, and carrier mobility of BNNTs can be tuned as a function of applied stress. Although BNNTs are considered candidates for field-emission applications, observations revealed that their properties degrade over cycles of emission. Results showed that, due to the high emission current density, the temperature of the sample increased and reached the decomposition temperature at which the B-N bonds start to break. In addition to BNNTs, we performed an in-situ study of the electrochemical properties of silicon nanorods (SiNRs). Specifically, lithiation and delithiation of SiNRs were studied with our STM-TEM system. Our observations showed the direct formation of Li₂₂Si₅ phases as a result of lithium intercalation. Radial expansion of the anode materials was observed and characterized in terms of size scale. Later, the formation and growth of lithium fibers on the surface of the anode materials were observed and studied. Results revealed the formation of lithium islands inside the ionic liquid electrolyte, which then grew as Li dendrites toward the cathode material.
Abstract:
By virtue of its proximity and richness, the Virgo galaxy cluster is a perfect testing ground to expand our understanding of structure formation in the Universe. Here, we present a comprehensive dynamical catalogue based on 190 Virgo cluster galaxies (VCGs) in the "Spectroscopy and H-band Imaging of the Virgo cluster" (SHIVir) survey, including kinematics and dynamical masses. Spectroscopy collected over a multi-year campaign on 4-8m telescopes was joined with optical and near-infrared imaging to create a cosmologically representative overview of parameter distributions and scaling relations describing galaxy evolution in a rich cluster environment. The use of long-slit spectroscopy has allowed the extraction and systematic analysis of resolved kinematic profiles: Hα rotation curves for late-type galaxies (LTGs), and velocity dispersion profiles for early-type galaxies (ETGs). The latter are shown to span a wide range of profile shapes which correlate with structural, morphological, and photometric parameters. A study of the distributions of surface brightnesses and circular velocities for ETGs and LTGs considered separately shows them all to be strongly bimodal, hinting at the existence of dynamically unstable modes where the baryon and dark matter fractions may be comparable within the inner regions of galaxies. Both our Tully-Fisher relation (TFR) for LTGs and our Fundamental Plane analysis for ETGs exhibit the smallest scatter when a velocity metric probing the galaxy at larger radii (where the baryonic fraction becomes sub-dominant) is used: rotational velocity measured in the outer disc at the 23.5 i-mag arcsec⁻² level, and velocity dispersion measured within an aperture of 2 effective radii, respectively. Dynamical estimates for gas-poor and gas-rich VCGs are merged into a joint analysis of the stellar-to-total mass relation (STMR), stellar TFR, and mass-size relation.
These relations are all found to contain strong bimodalities or dichotomies between the ETG and LTG samples, alluding to a "mixed scenario" evolutionary sequence between morphological/dynamical classes that involves both quenching and dry mergers. The unmistakable differentiation between these two galaxy classes appears robust against different classification schemes, and supports the notion that they are driven by different evolutionary histories. Future observations using integral field spectroscopy and including lower-mass galaxies should solidify this hypothesis.
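The scatter comparison between velocity metrics boils down to fitting a log-linear scaling relation and measuring the rms of its residuals. A minimal sketch of such a fit (generic least squares; the test numbers are made up, not SHIVir data):

```python
import numpy as np

def fit_scaling_relation(log_v, log_L):
    """Least-squares fit of log L = a + b * log v (e.g. a Tully-Fisher
    relation); returns the zero-point a, slope b, and rms scatter."""
    b, a = np.polyfit(log_v, log_L, 1)          # slope first, then intercept
    resid = log_L - (a + b * log_v)
    return a, b, float(np.sqrt(np.mean(resid ** 2)))
```

Repeating the fit with different velocity metrics (e.g. inner vs. outer-disc rotational velocity) and comparing the returned scatter is the comparison described above.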
Design of High-Reliability Power Supply Regulation Systems for Multi-Core Processors
Abstract:
Almost all components of the FIVR (the buck voltage regulator that supplies power to multi-core microprocessors) are implemented on the SoC die and therefore suffer from the reliability problems associated with microelectronic technology scaling, in particular the variation of process parameters during fabrication and faults in the switching devices (open circuits or short circuits). This thesis was carried out within a research project in collaboration with Intel Corporation and was developed in two parts. First, previous fault-analysis work on the FIVR was extended with a detailed study of the main effects of aging on the outputs of on-chip integrated voltage regulators. Then, a low-cost monitoring scheme was developed that can detect the effects of the most likely FIVR faults in the field. The scheme can also detect, over the FIVR's lifetime, aging effects that induce incorrect FIVR operation. The monitoring scheme was designed to be self-checking with respect to its own internal faults, so that such errors cannot compromise the correct signalling of FIVR faults.