942 results for Subpixel precision
Abstract:
Measurement of out-of-plane linear motion with high precision and bandwidth is indispensable for the development of precision motion stages and for the dynamic characterization of mechanical structures. This paper presents an optical beam deflection (OBD) based system for measuring the out-of-plane linear motion of fully reflective samples. The system also achieves nearly zero cross-sensitivity to angular motion and a large working distance. The sensitivities to linear and angular motion are obtained analytically and employed to optimize the system design. The optimal shot-noise-limited resolution is shown to be less than one angstrom over a bandwidth in excess of 1 kHz. Subsequently, the system is realized experimentally and the sensitivities to out-of-plane motions are calibrated using a novel strategy. The linear sensitivity is found to agree with theory. The angular sensitivity is shown to be more than 7.5 times smaller than that of conventional OBD. Finally, the measurement system is employed to measure the transient response of a piezo-positioner and, with the aid of an open-loop controller, to reduce the settling time by about 90%. It is also employed to operate the positioner in closed loop, demonstrating significant reduction of hysteresis and positioning error.
Abstract:
It was demonstrated in earlier work that, by approximating its range kernel using shiftable functions, the nonlinear bilateral filter can be computed using a series of fast convolutions. Previous approaches based on shiftable approximation have, however, been restricted to Gaussian range kernels. In this work, we propose a novel approximation that can be applied to any range kernel, provided it has a pointwise-convergent Fourier series. More specifically, we propose to approximate the Gaussian range kernel of the bilateral filter using a Fourier basis, where the coefficients of the basis are obtained by solving a series of least-squares problems. The coefficients can be computed efficiently using a recursive form of the QR decomposition. By controlling the cardinality of the Fourier basis, we can obtain a good tradeoff between run-time and filtering accuracy. In particular, we are able to guarantee subpixel accuracy for the overall filtering, which most existing methods for fast bilateral filtering do not provide. We present simulation results to demonstrate the speed and accuracy of the proposed algorithm.
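As a rough sketch of this idea (not the authors' implementation: the paper fits the coefficients via a recursive QR decomposition, whereas a plain least-squares solve is used below, and the image, sigmas, and basis size K are illustrative assumptions):

```python
# Sketch: approximate the Gaussian range kernel by a cosine (Fourier) basis fitted by
# least squares, then evaluate the bilateral filter as a short series of Gaussian
# convolutions of auxiliary images.
import numpy as np
from scipy.ndimage import gaussian_filter

def fourier_bilateral(img, sigma_s=3.0, sigma_r=30.0, K=25):
    T = img.max() - img.min()          # dynamic range; differences f(y)-f(x) lie in [-T, T]
    omega = np.pi / T
    # Fit c_k in sum_k c_k cos(k*omega*t) ~ exp(-t^2 / (2 sigma_r^2)) on a dense grid.
    t = np.linspace(-T, T, 1024)
    A = np.cos(np.outer(t, omega * np.arange(K + 1)))
    c, *_ = np.linalg.lstsq(A, np.exp(-t**2 / (2 * sigma_r**2)), rcond=None)

    num = np.zeros_like(img, dtype=float)
    den = np.zeros_like(img, dtype=float)
    for k in range(K + 1):
        ck, sk = np.cos(k * omega * img), np.sin(k * omega * img)
        # cos(k w (f(y)-f(x))) expands into products of these auxiliary images, so each
        # range-kernel term reduces to spatial (Gaussian) convolutions.
        num += c[k] * (ck * gaussian_filter(img * ck, sigma_s)
                       + sk * gaussian_filter(img * sk, sigma_s))
        den += c[k] * (ck * gaussian_filter(ck, sigma_s)
                       + sk * gaussian_filter(sk, sigma_s))
    den = np.where(np.abs(den) < 1e-8, 1e-8, den)   # guard against tiny denominators
    return num / den

# Example: filter a noisy test image.
out = fourier_bilateral(np.random.default_rng(1).uniform(0, 255, (64, 64)))
```

Increasing K tightens the fit of the cosine series to the Gaussian, trading run-time for accuracy, which is the tradeoff the abstract describes.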
Abstract:
Extreme isotopic variations among extraterrestrial materials provide great insight into the origin and evolution of the Solar System. In this tutorial review, we summarize how the measurement of isotope ratios can expand our knowledge of the processes that took place before and during the formation of our Solar System and its subsequent early evolution. The continuous improvement of high-precision, high-spatial-resolution mass spectrometers, including secondary ion mass spectrometry (SIMS), thermal ionization mass spectrometry (TIMS) and multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS), along with the ever-growing amount of available extraterrestrial samples, has significantly tightened the temporal and spatial constraints on the sequence of events that took place since and before the formation of the first Solar System condensates (i.e., Ca-Al-rich inclusions). Grains sampling distinct stellar environments with a wide range of isotopic compositions were admixed to, but possibly not fully homogenized in, the Sun's parent molecular cloud or the nascent Solar System. Before, during and after accretion of the nebula, as well as during the formation and subsequent evolution of planetesimals and planets, chemical and physical fractionation processes irrevocably changed the chemical and isotopic compositions of all Solar System bodies. Since the formation of the first Solar System minerals and rocks 4.568 Gyr ago, short- and long-lived radioactive decay and cosmic-ray interaction have also contributed to modifying the isotopic framework of the Solar System, and they permit tracing the formation and evolution of directly accessible and inferred planetary and stellar isotopic reservoirs.
Abstract:
Piezoelectric actuators are mounted on both sides of a rectangular wing model, and the possibility of improving aircraft rolling power is investigated. All experimental tasks, including design of the wind-tunnel model, verification of the material constants, measurement of the natural frequencies and checks of the actuator effects, ensure the correctness and precision of the finite element model. The wind-tunnel results show that the calculations agree with the experiments, validating the feasibility of the fictitious control surface.
Abstract:
It is now possible to improve the precision of well survey calculations by an order of magnitude using numerical approximation.
Although the most precise method of simulating and calculating a wellbore trajectory generally requires more computation than other, less accurate methods, the wider use of computers in oil fields now eliminates this as an obstacle.
The results of various calculations show that there is a deviation of more than 10 m among the different calculation methods for a 3,000-m directional well.¹ Consequently, it is important to improve the precision and reliability of survey calculation, the fundamental work required to quantitatively monitor and control wellbore trajectories.
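For a sense of why the choice of method matters at this scale, here is a hedged illustration (the abstract does not name the methods it compares; the tangential and minimum-curvature formulas and the synthetic 3,000-m trajectory below are assumptions of this sketch):

```python
# Sketch: bottom-hole position difference between two standard survey calculation
# methods applied to the same synthetic survey stations.
import numpy as np

def tangential(md, inc, azi):
    """Simple tangential method: use only the lower-station angles on each course."""
    dmd = np.diff(md)
    n = np.sum(dmd * np.sin(inc[1:]) * np.cos(azi[1:]))
    e = np.sum(dmd * np.sin(inc[1:]) * np.sin(azi[1:]))
    v = np.sum(dmd * np.cos(inc[1:]))
    return np.array([n, e, v])

def minimum_curvature(md, inc, azi):
    """Minimum-curvature method: fit a circular arc between consecutive stations."""
    dmd = np.diff(md)
    i1, i2, a1, a2 = inc[:-1], inc[1:], azi[:-1], azi[1:]
    cosb = np.clip(np.cos(i1) * np.cos(i2) + np.sin(i1) * np.sin(i2) * np.cos(a2 - a1), -1, 1)
    beta = np.arccos(cosb)                                   # dogleg angle
    rf = np.where(beta > 1e-9, 2 / np.maximum(beta, 1e-12) * np.tan(beta / 2), 1.0)
    n = np.sum(dmd / 2 * (np.sin(i1) * np.cos(a1) + np.sin(i2) * np.cos(a2)) * rf)
    e = np.sum(dmd / 2 * (np.sin(i1) * np.sin(a1) + np.sin(i2) * np.sin(a2)) * rf)
    v = np.sum(dmd / 2 * (np.cos(i1) + np.cos(i2)) * rf)
    return np.array([n, e, v])

# Synthetic 3,000-m directional well surveyed every 30 m: build to 45 deg, slow azimuth drift.
md = np.arange(0, 3001, 30.0)
inc = np.deg2rad(np.minimum(45.0, md / 50.0))
azi = np.deg2rad(60.0 + md / 100.0)
diff = tangential(md, inc, azi) - minimum_curvature(md, inc, azi)
print("bottom-hole position difference (m):", np.linalg.norm(diff))
```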
Abstract:
The advent of nanotechnology has necessitated a better understanding of how changes in material microstructure at the atomic level affect the macroscopic properties that control performance. This challenge has uncovered many phenomena that were previously not understood and taken for granted. Among them are the basic foundations of dislocation theories, which are now known to be inadequate. Simplifying assumptions invoked at the macroscale may not be applicable at the micro- and/or nanoscale. There are implications of scaling hierarchy associated with inhomogeneity and nonequilibrium of physical systems: what is taken to be homogeneous and in equilibrium at the macroscale may not be so when the physical size of the material is reduced to microns. These fundamental issues cannot be dispensed with at will for the sake of convenience, because they can alter the outcome of predictions. Even more unsatisfying is the lack of consistency in modeling physical systems. This can translate into an inability to identify the relevant manufacturing parameters, rendering the end product impractical because of high cost. Advanced composite and ceramic materials are cases in point. Potential pitfalls of applying models at both the atomic and continuum levels are discussed. No attempt is made to unravel the true nature of matter, be it particulates, a smooth continuum, or a combination of both. The present trend of development in scaling is to seek different characteristic lengths of material microstructures, with or without the influence of time effects. Much can be learned from atomistic simulation models that show how results can differ as boundary conditions and scales are changed. Quantum mechanics, continuum and cosmological models provide evidence that no general approach is in sight. Of immediate interest is perhaps the establishment of greater precision in terminology, so as to better communicate results involving multiscale physical events.
Abstract:
Liquid crystal on silicon (LCOS) is one of the most exciting technologies, combining the optical modulation characteristics of liquid crystals with the power and compactness of a silicon backplane. The objective of our work is to improve cell assembly and inspection methods by introducing new equipment for automated assembly and by using an optical inspection microscope. A SUSS MicroTec universal device bonder is used for precision assembly and device packaging, and an Olympus BX51 high-resolution microscope is employed for device inspection. ©2009 Optical Society of America.
Abstract:
A novel slope delay model for CMOS switch-level timing verification is presented. It differs from conventional methods in being semianalytic in character. The model assumes that all input waveforms are trapezoidal in overall shape but vary in their slope. This simplification is reasonable, does not seriously affect precision, and facilitates rapid solution. The model divides the stages in a switch-level circuit into two types: one corresponds to logic gates, and the other to logic gates with pass transistors connected to their outputs. Semianalytic modeling for both cases is discussed.
Abstract:
A pin-on-cylinder wear rig has been built with precision stepping-motor drives for both the rotary and axial motions, enabling accurate positional control. Initial experiments using sapphire indenters running against copper substrates have investigated the build-up of a single wear groove by repeated sliding along the same track. An approximate three-dimensional ploughing analysis is also presented, and the results of theory and experiment are compared.
Abstract:
In the present paper, crack identification problems are investigated. Such problems belong to the class of inverse problems and are usually ill-posed with respect to their solutions. The paper comprises two parts: (1) based on the dynamic BIEM and an optimization method, and using measured dynamic information on the outer boundary, the identification of a crack in a finite domain is investigated, and a method for choosing a highly sensitive frequency region is proposed to improve the precision; (2) based on the 3-D static BIEM and hypersingular integral equation theory, the identification of a penny-shaped crack in a finite body is reduced to an optimization problem. The investigation provides some initial understanding of 3-D inverse problems.
Abstract:
Based on the scaling criteria for polymer-flooding reservoirs obtained in our previous work, in which gravity and capillary forces, compressibility, non-Newtonian behavior, adsorption, dispersion, and diffusion are considered, eight partial similarity models are designed. A new numerical approach to sensitivity analysis is suggested to quantify the dominance degree of the relaxed dimensionless parameters of a partial similarity model, and a sensitivity factor quantifying this dominance degree is defined. By solving the dimensionless governing equations including all dimensionless parameters, the sensitivity factor of each relaxed dimensionless parameter is calculated for each partial similarity model; thus, the dominance degree of each relaxed parameter is quantitatively determined. Based on the sensitivity analysis, the effect coefficient of a partial similarity model is defined as the sum, over the relaxed dimensionless parameters, of the product of each parameter's sensitivity factor and its relative relaxation quantity. The effect coefficient is used as a criterion to evaluate each partial similarity model; the partial similarity model with the smallest effect coefficient can then be singled out as the best approximation to the prototype. Results show that the precision of a partial similarity model is determined not only by the number of satisfied dimensionless parameters but also by the relative relaxation quantities of the relaxed ones.
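The abstract states this definition only in words; a plausible symbolic form (notation assumed here, not taken from the paper) is:

```latex
E \;=\; \sum_{i \in \mathcal{R}} S_i \, \frac{\lvert \Delta\pi_i \rvert}{\pi_i}
```

where the set R indexes the relaxed dimensionless parameters π_i, S_i is the sensitivity factor of π_i, and |Δπ_i|/π_i is its relative relaxation quantity; the partial similarity model minimizing E is selected to approximate the prototype.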
Abstract:
We design a particle interpretation of Feynman-Kac measures on path spaces based on a backward Markovian representation combined with a traditional mean-field particle interpretation of the flow of their final-time marginals. In contrast to traditional genealogical-tree-based models, these new particle algorithms can be used to compute normalized additive functionals "on the fly", as well as their limiting occupation measures, with a degree of precision that does not depend on the final time horizon. We provide uniform convergence results with respect to the time horizon parameter, as well as functional central limit theorems and exponential concentration estimates. Our results have important consequences for online parameter estimation in non-linear non-Gaussian state-space models. We show how the forward-filtering backward-smoothing estimates of additive functionals can be computed using a forward-only recursion.
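As a rough illustration of a forward-only recursion of this kind (a minimal sketch, not the authors' exact algorithm; the AR(1) model, parameter values, and the additive functional Σ_t x_t are illustrative assumptions):

```python
# Sketch: bootstrap particle filter on a linear-Gaussian AR(1) model, carrying per-particle
# statistics Tstat[i] ~ E[ sum_k x_k | x_t = x_t^i ] forward, so the smoothed additive
# functional is available at every step without storing genealogical trees.
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 50                 # particles, time horizon
phi, sx, sy = 0.9, 1.0, 1.0    # AR(1) coefficient, state / observation noise std

# Simulate synthetic data from the model.
x_true, y, xp = np.zeros(T), np.zeros(T), 0.0
for t in range(T):
    xp = phi * xp + sx * rng.standard_normal()
    x_true[t], y[t] = xp, xp + sy * rng.standard_normal()

def trans(x_prev, x_curr):
    """Transition density f(x_curr | x_prev), up to a constant factor."""
    return np.exp(-0.5 * ((x_curr - phi * x_prev) / sx) ** 2)

x = sx * rng.standard_normal(N)                        # initial particle cloud
w = np.exp(-0.5 * ((y[0] - x) / sy) ** 2); w /= w.sum()
Tstat = x.copy()                                       # statistic at time 0 is x_0
for t in range(1, T):
    x_new = phi * x[rng.choice(N, N, p=w)] + sx * rng.standard_normal(N)  # resample + move
    # Forward-only smoothing update (O(N^2) per step): mix the previous statistics
    # through the transition density instead of tracing ancestral lines backward.
    K = w[None, :] * trans(x[None, :], x_new[:, None])
    K /= K.sum(axis=1, keepdims=True)
    Tstat = (K * (Tstat[None, :] + x_new[:, None])).sum(axis=1)
    x = x_new
    w = np.exp(-0.5 * ((y[t] - x) / sy) ** 2); w /= w.sum()

print("forward-only estimate of the smoothed additive functional:", float(w @ Tstat))
print("realized sum of true states:", x_true.sum())
```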
Abstract:
This contribution aims to assess the 'historicity' of the New Testament texts, specifically of those traditionally considered to present the most faithful ambient 'wrapping'. Such a task cannot be carried out without first discerning the differences that its various texts present, without explaining the different settings in which each tradition was forged, or the intent of the various genres employed. The work of Luke, constituted by the sum of the third synoptic gospel and the so-called Acts of the Apostles, presents the most evidently diachronic perspective, from the birth of Jesus to the establishment of Christianity in Rome, and amounts to almost a third of the New Testament text, considerably more if we bear in mind that its comprehension requires comparison with the other synoptic gospels and with the Pauline letters. The perspective from which this study is approached is that of the historian, not that of exegesis: the work of Luke is analyzed as if it were just another text of the Hellenistic tradition. A text that must therefore respond to literary canons comprehensible to its hypothetical readers, a text constructed in the years of the Roman Empire's greatest splendor, most probably at the end of the first century, in a geographical and cultural context as yet imprecise, but one that must take into account the Palestinian problems following the Jewish war of the years 67-70 and the environment of religious contention and theological creativity that would necessarily characterize a new religion, still in the process of formation, which was shaping and perfecting its definitive marks of identity. In this light, the personality of the author and his level of commitment to the religious group he sets out to portray must be assessed; of course, it is necessary to decipher the intent of the text, conditioned by its genre and by the audience it seeks to reach. We must place the particular information that Luke-Acts provides within a context and, when possible, corroborate that information by recourse to other contemporary sources. From this process we can conclude whether the information provided is truthful or not; if that level of precision is impossible, we can at least pronounce on whether it is credible or whether, on the contrary, it is mere artifice.
Abstract:
The present study set out to analyze whether the time children take to recode a word affects the possibility of storing its orthographic form. To that end, 40 third-grade primary school children phonologically recoded pseudowords with complex spellings. Reading accuracy and reading time were measured. Three days after the reading sessions, the subjects completed a dictation task and a lexical decision task that included the pseudowords they had read. The results indicated that the measures of orthographic learning were related to reading time but not to recoding accuracy. This finding seems to suggest that, in transparent orthographies such as Spanish, recoding time has a greater impact than accuracy on the formation of orthographic representations.
Abstract:
A semi-weight function method is developed to solve the plane problem of two bonded dissimilar materials containing a crack along the bond. From the equilibrium equations, the stress-strain relations, and the conditions of continuity across the interface and of traction-free crack surfaces, the stress and displacement fields are obtained; the eigenvalue of these fields is λ. The semi-weight functions are obtained as virtual displacement and stress fields with eigenvalue −λ. Integral expressions for the fracture parameters K_I and K_II are obtained from the reciprocal work theorem, using the semi-weight functions together with approximate displacement and stress values on any integration path around the crack tip. Results from example applications show that the semi-weight function method is simple, convenient and highly precise.
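The abstract does not reproduce the integral expressions; schematically, a reciprocal-work extraction of this kind takes the form (pairing constants and normalization assumed, not taken from the paper):

```latex
\oint_{\Gamma} \left( \sigma_{ij}^{*}\, u_i - \sigma_{ij}\, u_i^{*} \right) n_j \,\mathrm{d}s
  \;=\; c_{\mathrm{I}} K_{\mathrm{I}} + c_{\mathrm{II}} K_{\mathrm{II}}
```

where the starred fields are the semi-weight functions (eigenvalue −λ), the unstarred fields are the approximate displacements and stresses evaluated on the path Γ, and the constants c_I, c_II follow from the chosen normalization of the semi-weight functions; path independence of the integral is what allows Γ to be any contour around the crack tip.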