964 results for Machine-tools.
Abstract:
This thesis covers a range of topics in numerical and analytical relativity, centered around introducing tools and methodologies for the study of dynamical spacetimes. The scope of the studies is limited to classical (as opposed to quantum) vacuum spacetimes described by Einstein's general theory of relativity. The numerical works presented here are carried out within the Spectral Einstein Code (SpEC) infrastructure, while analytical calculations extensively utilize Wolfram's Mathematica program.
We begin by examining highly dynamical spacetimes, such as binary black hole mergers, which can be investigated using numerical simulations. However, there are difficulties in interpreting the output of such simulations. One difficulty stems from the lack of a canonical coordinate system (henceforth referred to as gauge freedom) and tetrad against which quantities such as the Newman-Penrose scalar Psi_4 (usually interpreted as the gravitational-wave part of the curvature) should be measured. We tackle this problem in Chapter 2 by introducing a set of geometrically motivated coordinates that are independent of the simulation gauge choice, together with a quasi-Kinnersley tetrad that is likewise gauge-invariant and optimally suited to the task of gravitational-wave extraction.
Another difficulty arises from the need to condense the overwhelming amount of data generated by the numerical simulations. In order to extract physical information in a succinct and transparent manner, one may define a version of gravitational field lines and field strength using spatial projections of the Weyl curvature tensor. The introduction, investigation, and application of these quantities constitute the main content of Chapters 3 through 6.
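For context, the standard decomposition underlying such field lines can be sketched as follows (our notation, not quoted from the thesis). Relative to a timelike unit normal u^a (denoted here by the index 0), the Weyl tensor C_{abcd} splits into electric and magnetic parts,

    \mathcal{E}_{ij} = C_{0i0j}, \qquad \mathcal{B}_{ij} = \tfrac{1}{2}\,\epsilon_i{}^{kl}\,C_{kl0j},

and the gravitational field lines are the integral curves of the eigenvector fields of these symmetric 3-tensors, with the corresponding eigenvalues playing the role of field strength.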
For the last two chapters, we turn to the analytical study of a simpler dynamical spacetime, namely a perturbed Kerr black hole. In Chapter 7 we introduce a new analytical approximation to the quasi-normal mode (QNM) frequencies and relate various properties of these modes to wave packets traveling on unstable photon orbits around the black hole. In Chapter 8, we study a bifurcation in the QNM spectrum as the black hole's spin a approaches extremality.
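For orientation, the connection between QNMs and photon orbits is usually expressed through the eikonal approximation (a standard formula, summarized here in our notation):

    \omega_{\ell n} \approx \ell\,\Omega_c \;-\; i\left(n + \tfrac{1}{2}\right)|\lambda|,

where \Omega_c is the orbital angular frequency of the unstable circular photon orbit and \lambda its Lyapunov exponent; for Schwarzschild, \Omega_c = 1/(3\sqrt{3}\,M). The real part of the frequency then measures how fast a wave packet circles the light ring, and the imaginary part how fast it leaks away.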
Abstract:
The objectives of this work are to analyze and optimize the hard-turning process for ASP-23 steel, with particular attention to developing different solutions for broaches. The project arises from the importance of reducing both the economic and the time costs of manufacturing ASP-23 steel components by hard turning, a machining process of ever-growing importance in industries such as automotive and aeronautics. The project is the product of the need of EKIN S. Coop, one of the leaders in high-precision machine-tool broaching processes, to develop a more efficient machining process for the broaches it produces. Accordingly, in the machine-tool laboratory (ETSIB) we have sought to demonstrate the benefits of hard turning in the machining of ASP-23. Today, with the rapid development of new materials, manufacturing processes are becoming increasingly complex, owing to the wide variety of machines on which the processes are carried out, the variety of tool geometries and materials, the properties of the workpiece material, the broad range of cutting parameters with which the process can be implemented (depth of cut, cutting speed, feed...), and the diversity of clamping elements used. We must also be aware that such variety implies large deformations, high speeds, and high temperatures. Herein lie the justification for and the great interest of this project. With it we attempt to take a small step forward in the understanding of hard turning of steels with poor machinability, conscious of the breadth and difficulty of progress in manufacturing engineering and of how much work remains to be done.
Abstract:
There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled as approximately sparse, and exploiting this fact can be very beneficial. This has led to immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools for problems arising in machine learning, system identification, and quantum tomography.
In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:
- For a given number of measurements, can we reliably estimate the true signal?
- If so, how good is the reconstruction as a function of the model parameters?
More specifically: (i) focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model; (ii) we show that intuitive convex approaches do not perform as well as expected on signals that have multiple low-dimensional structures simultaneously; (iii) finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model. We pay particular attention to the following aspects. For (i) and (ii), we aim to provide a general geometric framework in which the results on sparse and low-rank estimation can be obtained as special cases. For (i) and (iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
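To make the relaxation idea concrete, the following is a minimal, self-contained sketch (our illustration, not code from the dissertation) of the lasso as the l1 relaxation of combinatorial sparse recovery, solved by iterative soft thresholding (ISTA):

    import numpy as np

    def ista_lasso(A, y, lam, n_iter=500):
        """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft thresholding."""
        t = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from the spectral norm of A
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = x - t * A.T @ (A @ x - y)        # gradient step on the smooth quadratic
            x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # prox of the l1 term
        return x

    # Usage: recover a 10-sparse signal in R^200 from 80 random linear measurements.
    rng = np.random.default_rng(0)
    n, p, k = 80, 200, 10
    A = rng.standard_normal((n, p)) / np.sqrt(n)
    x_true = np.zeros(p)
    x_true[rng.choice(p, size=k, replace=False)] = rng.standard_normal(k)
    x_hat = ista_lasso(A, A @ x_true, lam=0.01)

The measurement count n, ambient dimension p, and sparsity k here are arbitrary illustrative values; how reconstruction quality scales with such parameters is exactly what the guarantees described above quantify.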
Abstract:
In the first part of the thesis we explore three fundamental questions that arise naturally in a machine learning scenario where the training and test distributions can differ. Contrary to conventional wisdom, we show that mismatched training and test distributions can in fact yield better out-of-sample performance. This optimal performance is obtained by training with the dual distribution, which depends on the test distribution set by the problem but not on the target function that we want to learn. We show how to obtain this distribution in both discrete and continuous input spaces, as well as how to approximate it in a practical scenario. The benefits of using this distribution are exemplified on both synthetic and real data sets.
In order to apply the dual distribution in the supervised learning scenario where the training data set is fixed, it is necessary to use weights to make the sample appear as if it came from the dual distribution. We explore the negative effect that weighting a sample can have. The theoretical decomposition of how weights affect the out-of-sample error is easy to understand but not actionable in practice, as the quantities involved cannot be computed. Hence, we propose the Targeted Weighting algorithm, which determines, in a practical setting, whether the out-of-sample performance will improve for a given set of weights. This is necessary because the setting assumes there are no labeled points distributed according to the test distribution, only unlabeled samples.
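As background for the weighting step, here is a generic covariate-shift sketch (ours; the Targeted Weighting algorithm itself is not reproduced here). To make a sample drawn from a training density p look as if it came from a target density q, each point receives the density-ratio weight w(x) = q(x)/p(x) in the training loss:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(0.0, 1.0, 500)                 # training inputs drawn from p = N(0, 1)
    y = np.sin(x) + 0.1 * rng.standard_normal(500)

    def gauss(x, mu, s):
        return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

    w = gauss(x, 1.0, 0.5) / gauss(x, 0.0, 1.0)   # weights for target q = N(1, 0.5^2)

    # Importance-weighted linear fit y ~ a*x + b via the weighted normal equations.
    X = np.column_stack([x, np.ones_like(x)])
    a, b = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

The densities p and q are stand-ins chosen for illustration; in the thesis's setting the target would be the dual (or test) distribution, and the decomposition mentioned above concerns precisely when such weights help or hurt the out-of-sample error.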
Finally, we propose a new class of matching algorithms that can be used to match the training set to a desired distribution, such as the dual distribution (or the test distribution). These algorithms can be applied to very large datasets, and we show how they lead to improved performance on a large real dataset such as the Netflix dataset. Their low computational complexity is their main advantage over previous algorithms proposed in the covariate-shift literature.
In the second part of the thesis we apply machine learning to the problem of behavior recognition. We develop a specific behavior classifier to study fly aggression, and we develop a system for analyzing the behavior of animals in videos with minimal supervision. The system, which we call CUBA (Caltech Unsupervised Behavior Analysis), detects movemes, actions, and stories from time series describing the positions of animals in videos. The method both summarizes the data and provides biologists with a mathematical tool for testing new hypotheses. Other benefits of CUBA include finding classifiers for specific behaviors without the need for annotation, as well as providing means to discriminate between groups of animals, for example according to their genetic line.
Abstract:
The degradation of image quality caused by aberrations of the projection optics in lithographic tools is a serious problem in optical lithography. We propose what we believe to be a novel technique for measuring aberrations of projection optics based on two-beam interference theory. Using partially coherent imaging theory, we derive a novel model that accurately characterizes the aberration-induced image displacement of a fine grating pattern relative to a large pattern. Both even and odd aberrations are extracted independently from the relative image displacements of the printed patterns by two-beam interference imaging of the zeroth and positive first orders. Simulation results show that this technique can measure the aberrations present in the lithographic tool with improved accuracy. (c) 2006 Optical Society of America.
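The mechanism can be summarized with textbook two-beam imaging (our sketch, not the paper's exact model). When the zeroth and positive first diffraction orders interfere, the aerial image of a grating of pitch p is

    I(x) \propto 1 + \cos\!\left(\frac{2\pi x}{p} + \frac{2\pi}{\lambda}\,\bigl[W(f_1) - W(f_0)\bigr]\right),

where W is the pupil wavefront error evaluated at the pupil positions f_0 and f_1 of the two orders. The pattern therefore shifts by

    \Delta x = -\frac{p}{\lambda}\,\bigl[W(f_1) - W(f_0)\bigr],

so measuring \Delta x for patterns that sample different pupil positions constrains the aberration coefficients.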
Abstract:
As the critical dimension shrinks, the degradation in image quality caused by wavefront aberrations of the projection optics in lithographic tools becomes a serious problem, and a technique for fast and accurate in situ aberration measurement is needed. We introduce what we believe to be a novel technique for characterizing the aberrations of projection optics by using an alternating phase-shifting mask. Even aberrations, such as spherical aberration and astigmatism, are extracted from the focus shifts of the phase-shifted pattern, while odd aberrations, such as coma, are extracted from its image displacements. The focus shifts and image displacements are measured with a transmission image sensor. Simulation results show that, compared with the previous straightforward measurement technique, the accuracy of the coma measurement increases by more than 30% and that of the spherical-aberration measurement by approximately 20%. (c) 2006 Optical Society of America.
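The even/odd split follows from the same two-beam picture (again our schematic summary, not the paper's derivation). An alternating phase-shifting mask ideally produces only the \pm 1 orders, sampling the pupil wavefront at \pm f, where it decomposes as

    W(\pm f) = W_{\mathrm{even}}(f) \pm W_{\mathrm{odd}}(f).

Only the antisymmetric part moves the frequency-doubled fringes laterally,

    \Delta x = -\frac{p}{\lambda}\,W_{\mathrm{odd}}(f),

while the symmetric part adds to the defocus phase, which is itself even in f, and therefore shifts the plane of best focus instead.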
Abstract:
As the feature size decreases, degradation of image quality caused by wavefront aberrations of the projection optics in lithographic tools has become a serious problem in the low-k1 process. We propose a novel technique for in situ characterization of the aberrations of projection optics in lithographic tools. Taking the impact of partially coherent illumination into account, we introduce a novel algorithm that accurately describes the pattern displacement and focus shift induced by aberrations. With this algorithm, the measurement condition is extended from three-beam interference to two-, three-, and hybrid-beam interference. Experiments are performed to measure the aberrations of the projection optics in an ArF scanner. (C) 2006 Optical Society of America.