979 results for Time-shift estimation


Relevance:

90.00%

Publisher:

Abstract:

A novel optical method is proposed and demonstrated for real-time dimension estimation of thin opaque cylindrical objects. The methodology relies on the free-space Fraunhofer diffraction principle. Under a suitable choice of illumination conditions, the central region of the tailored diffraction pattern comprises a pair of equal-intensity maxima whose separation remains constant, independent of the diameter of the diffracting object. An analysis of the intensity distribution in this region reveals the following. At a point symmetrically located between the said maxima, the light intensity varies characteristically with the diameter of the diffracting object, exhibiting a relatively stronger intensity modulation under spherical-wave illumination than under plane-wave illumination. The analysis further reveals that this intensity variation with diameter is controllable through the illumination conditions. Exploiting these hitherto unexplored features, the present communication reports, for the first time, a reliable method for estimating the diameter of thin opaque cylindrical objects in real time, with nanometer resolution, from a single-point intensity measurement. Based on the proposed methodology, results of several simulation and experimental investigations carried out on metallic wires with diameters spanning the range 5 to 50 µm are presented. The results show that the proposed method is well suited for high-resolution on-line monitoring of ultrathin wire diameters, extensively used in the micro-mechanics and semiconductor industries, where conventional diffraction-based methods fail to produce accurate results.
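
As a rough numerical companion to the principle described above, the sketch below evaluates the plane-wave, slit-equivalent far-field pattern (via Babinet's principle) at a fixed off-axis probe point for diameters across the 5-50 µm range. The wavelength, screen distance, and probe position are illustrative assumptions, not the paper's tailored illumination setup.

```python
import numpy as np

# Much-simplified sketch: far-field (Fraunhofer) diffraction of an opaque
# wire of diameter d under plane-wave illumination. By Babinet's principle
# the off-axis pattern matches that of a slit of width d:
#   I(theta) ~ sinc^2(d * sin(theta) / lambda).
# All geometry values below are illustrative assumptions.

lam = 632.8e-9            # He-Ne wavelength (m), assumed
L = 1.0                   # wire-to-screen distance (m), assumed
x_probe = 5e-3            # fixed off-axis probe point on the screen (m), assumed

def intensity(d, x):
    """Relative intensity at screen position x for wire diameter d."""
    beta = np.pi * d * (x / L) / lam      # small-angle: sin(theta) ~ x/L
    return np.sinc(beta / np.pi) ** 2     # np.sinc(t) = sin(pi t)/(pi t)

for d in np.linspace(5e-6, 50e-6, 10):    # 5-50 um, the range studied
    print(f"d = {d*1e6:5.1f} um  ->  I/I0 = {intensity(d, x_probe):.4f}")
```

The monotone intensity change with diameter at a fixed probe point is what makes a single-point measurement informative about the wire dimension.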

Relevance:

90.00%

Publisher:

Abstract:

In exploration seismology, imaging geologic targets such as oil and gas reservoirs in complex media requires high-accuracy images of the structure and lithology of the medium, so prestack imaging and elastic inversion of seismic waves in complex media have moved to the leading edge of research. The seismic response measured at the surface carries two fundamental pieces of information: the propagation effects of the medium and the reflections from the layer boundaries within it. Propagation represents the low-wavenumber component of the medium, the so-called trend or macro layering, whereas the reflections represent the high-wavenumber component, the detailed or fine layering. Migration velocity analysis resolves the low-wavenumber component of the medium, while prestack elastic inversion resolves the high-wavenumber component. This dissertation studies both aspects: migration velocity estimation and elastic inversion.

First, any migration velocity analysis method must include two basic elements: a criterion that tells us whether the model parameters are correct, and an updating scheme that tells us how to revise the model parameters when they are not. Together these determine the properties and efficiency of the velocity estimation method. The dissertation presents a migration velocity analysis method based on the CFP technology, in which a top-down layer-stripping strategy is adopted to avoid the difficulties of event selection. The proposed method has the advantage that the travel-time errors obtained from the DTS panel are defined directly in time, unlike methods based on common image gathers, in which the residual curvature measured in depth must be converted into travel-time errors. The proposed migration velocity analysis method improves on four aspects:

- A new parameterization of the velocity model is provided, in which layer boundaries are interpolated with cubic splines through control locations, and the velocity within a layer may change laterally, computed as a segmented linear function of the velocities at lateral control points. This parameterization is well suited to the updating procedure.
- Analytical formulas for the travel-time errors and the model parameter updates in the tau-p domain are derived under a locally laterally homogeneous assumption. The velocity estimates are computed iteratively as a parametric inversion; a zero differential time shift in the DTS panel for each layer indicates convergence of the velocity estimation.
- A method for building the initial model from prior information is provided to improve the efficiency of the velocity analysis: events of interest picked in the stacked section define the layer boundaries, and the results of conventional velocity analysis define the layer velocities.
- An interactive, integrated software environment combining migration velocity analysis and prestack migration is built.

The proposed method is first applied to synthetic data; the results show that both the accuracy and the efficiency of the velocity estimation are very good. The method is also applied to field data, a marine data set.
In this example, prestack and poststack depth migrations of the data are carried out using velocity models built with different methods. The comparison shows that the model from the proposed method is better and clearly improves the quality of the migration. Following the theoretical method of expressing a multivariable function as products of single-variable functions suggested by Song Jian (2001), the separable expression of the one-way wave operator is studied. An optimized approximation based on the separable expression of the one-way wave operator is presented; it easily handles lateral velocity variation in the space and wavenumber domains, respectively, and has good approximation accuracy. A new prestack depth migration algorithm based on this optimized separable approximation is developed and used to test the velocity estimation results.

Second, according to the theory of seismic wave reflection and transmission, the variation of amplitude with incidence angle is related to the elasticity of the media on either side of an interface. Conventional inversion with poststack data can use only the information of the reflection operator at zero incidence angle; if more robust solutions are required, the amplitudes at all incidence angles should be used. A natural separable expression of the reflection/transmission operator is presented as a sum of products of two groups of functions: one group varies with phase space, while the other is related to the elastic parameters of the medium and the geological structure. By employing this natural separable expression, a seismic modeling method based on the one-way wave equation is developed to model the primary reflected waves; it accommodates moderately heterogeneous media and preserves the AVA accuracy of the reflections for incidence angles below 45°, with very high computational efficiency. The natural separable expression of the reflection/transmission operator is also used to construct a prestack elastic inversion algorithm. Unlike AVO analysis and inversion, which use angle gathers formed during prestack migration, the proposed algorithm constructs a system of linear equations during prestack migration via the separable expression of the reflection/transmission operator. The unknowns of these equations are related to the elasticity of the medium, so their solutions provide the elastic information of the medium. The proposed inversion method is essentially the same as AVO inversion; the differences lie only in how the amplitude variation with incidence angle is processed and in the computational domain.
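
The velocity-model parameterization described in the first bullet above is concrete enough to sketch: layer boundaries as cubic splines through control locations, with a segmented-linear lateral velocity. The snippet below is an illustrative reconstruction with made-up control points, not the dissertation's code.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative sketch of the model parameterization described above:
# a layer boundary is a cubic spline through depth control points, and the
# velocity within the layer varies laterally as a segmented-linear function
# of velocity control points. All numbers are made up for illustration.

x_ctrl = np.array([0.0, 2000.0, 4000.0, 6000.0])     # lateral control positions (m)
z_ctrl = np.array([500.0, 620.0, 580.0, 540.0])      # boundary depths (m)
v_ctrl = np.array([1800.0, 1950.0, 2050.0, 2000.0])  # layer velocities (m/s)

boundary = CubicSpline(x_ctrl, z_ctrl)               # smooth boundary z(x)

def layer_velocity(x):
    """Segmented-linear lateral velocity within the layer."""
    return np.interp(x, x_ctrl, v_ctrl)

x = np.linspace(0.0, 6000.0, 7)
print("z(x):", np.round(boundary(x), 1))
print("v(x):", np.round(layer_velocity(x), 1))
```

Because both the boundary and the velocity depend on a small set of control values, the iterative parametric updates described in the abstract reduce to adjusting those control values.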

Relevance:

90.00%

Publisher:

Abstract:

A conventional local model (LM) network consists of a set of affine local models blended together using appropriate weighting functions. Such networks have poor interpretability since the dynamics of the blended network are only weakly related to the underlying local models. In contrast, velocity-based LM networks employ strictly linear local models to provide a transparent framework for nonlinear modelling in which the global dynamics are a simple linear combination of the local model dynamics. A novel approach for constructing continuous-time velocity-based networks from plant data is presented. Key issues including continuous-time parameter estimation, correct realisation of the velocity-based local models and avoidance of the input derivative are all addressed. Application results are reported for the highly nonlinear simulated continuous stirred tank reactor process.
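
To make the blending idea concrete, here is a toy sketch in which the global derivative is a weighted combination of strictly linear local models; the local matrices and the Gaussian validity functions are invented for illustration and are not the paper's identified CSTR models.

```python
import numpy as np

# Toy sketch of the blending idea behind a velocity-based LM network:
# the global dynamics are a weighted sum of strictly linear local models,
# with normalised Gaussian validity functions over a scheduling variable.
# All models and centres below are illustrative placeholders.

A = [np.array([[-1.0]]), np.array([[-3.0]])]   # local dynamics matrices
B = [np.array([[1.0]]), np.array([[0.5]])]     # local input matrices
centres, width = [0.2, 0.8], 0.3               # validity-function parameters

def weights(phi):
    w = np.array([np.exp(-0.5 * ((phi - c) / width) ** 2) for c in centres])
    return w / w.sum()                          # normalise so weights sum to 1

def xdot(x, u, phi):
    """Blended derivative: a linear combination of the local linear dynamics."""
    w = weights(phi)
    return sum(wi * (Ai @ x + Bi * u) for wi, Ai, Bi in zip(w, A, B))

x = np.array([[0.1]])
print(xdot(x, u=1.0, phi=0.5))
```

The transparency claimed in the abstract comes from this structure: at any operating point the global dynamics are simply the weighted sum of the local linear dynamics.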

Relevance:

90.00%

Publisher:

Abstract:

In this article I will present two arguments. First, that the time travel television series historically provided viewers with a spectacular temporal and spatial alternative to the routine of everyday life, the regulation of television scheduling, and the small-world confines of domestic subjectivity. Taking the decades of the 1970s and 1980s, predominantly in a UK viewing environment, I will suggest that the special-effect rendering of the time travel sequence expanded the viewer's material universe, and affectively wrenched the television set free from the strictures of scheduling and realist programming. Further, the time travel series readily and regularly took the domestic space, the ordinary day and the everyman/person into awesome environments and situations that suggested alternative lifestyles and behaviours, with a different existential tempo and rhythm. At a narrative, thematic, meta-textual, and aesthetically spectacular level, television time travel saw to the wonderful end of the working day. Case studies include Sapphire & Steel, Doctor Who, and Quantum Leap. Second, the article will argue that, rather than being an extraordinary alternative to ordinary life, contemporary time travel television series instead articulate convergence culture, deregulation, multiple-channel viewing, and a time-shift culture in which there is no such thing as an ordinary working day or domestic viewing context.

Relevance:

90.00%

Publisher:

Abstract:

In this paper we present a hybrid method for tracking human motion in real time. With simplified marker sets and monocular video input, the strengths of both marker-based and marker-free motion capture are utilized: cumbersome marker calibration is avoided, while the robustness of the marker-free tracking is enhanced by referencing the tracked marker positions. An improved inverse kinematics solver is employed for real-time pose estimation, and a computer-vision-based approach is applied to refine the pose estimate and reduce the ambiguity of the inverse kinematics solutions. We use this hybrid method to capture typical table tennis upper-body movements in a real-time virtual reality application.
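
The abstract does not specify the solver's internals, so the following is only a generic sketch of the kind of iterative inverse-kinematics step such a tracker might use: damped least squares on a planar two-link arm driven toward a tracked marker position. Link lengths, damping, and the target are illustrative assumptions.

```python
import numpy as np

# Generic damped-least-squares IK sketch (not the paper's solver):
# a planar 2-link arm is iteratively driven toward a tracked marker.

L1, L2, damping = 1.0, 0.8, 0.1        # illustrative link lengths and damping

def fk(q):
    """End-effector position of the 2-link arm for joint angles q."""
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0]+q[1]), np.cos(q[0]+q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

def ik_step(q, target):
    """One damped-least-squares update toward the marker position."""
    J, err = jacobian(q), target - fk(q)
    dq = J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), err)
    return q + dq

q = np.array([0.3, 0.5])
for _ in range(20):
    q = ik_step(q, target=np.array([1.2, 0.6]))
print("joints:", q, " end effector:", fk(q))
```

The damping term keeps the update well-behaved near singular poses, which is one standard way to reduce the ambiguity of IK solutions mentioned above.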

Relevance:

90.00%

Publisher:

Abstract:

The need for timely population data for health planning and indicators of need has increased the demand for population estimates. The data required to produce estimates are difficult to obtain and the process is time-consuming. Estimation methods that require less effort and fewer data are needed. The structure preserving estimator (SPREE) is a promising technique not previously used to estimate county population characteristics. This study first uses traditional regression estimation techniques to produce estimates of county population totals. Then the structure preserving estimator, using the results produced in the first phase as constraints, is evaluated.

Regression methods are among the most frequently used demographic methods for estimating populations. These methods use symptomatic indicators to predict population change. This research evaluates three regression methods to determine which produces the best estimates based on the 1970-to-1980 indicators of population change. Strategies for stratifying data to improve the ability of the methods to predict change were tested. Difference-correlation using PMSA strata produced the equation that fit the data best. Regression diagnostics were used to evaluate the residuals.

The second phase of this study evaluates use of the structure preserving estimator in making estimates of population characteristics. The SPREE approach uses existing data (the association structure) to establish the relationship between the variable of interest and the associated variable(s) at the county level. Marginals at the state level (the allocation structure) supply the current relationship between the variables. The full allocation structure model uses current estimates of county population totals to limit the magnitude of county estimates; the limited full allocation structure model places no constraints on county size. The 1970 county census age-gender population provides the association structure; the allocation structure is the 1980 state age-gender distribution.

The full allocation model produces good estimates of the 1980 county age-gender populations. An unanticipated finding of this research is that the limited full allocation model produces estimates of county population totals that are superior to those produced by the regression methods. The full allocation model is used to produce estimates of 1986 county population characteristics.
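
SPREE-style adjustment is commonly carried out by iterative proportional fitting: the association-structure table is rescaled until its margins match the allocation structure. The sketch below illustrates this with made-up county-level numbers, not the study's census data.

```python
import numpy as np

# Illustrative sketch of the SPREE idea: adjust the association structure
# (here, a 1970 county-by-age-gender table) so its margins match the
# allocation structure (current totals), via iterative proportional fitting.
# All numbers are made up for illustration.

assoc = np.array([[120., 130.],     # counties x age-gender groups, 1970
                  [ 80.,  90.],
                  [200., 210.]])
row_targets = np.array([270., 180., 460.])   # current county totals (constraints)
col_targets = np.array([430., 480.])         # current state age-gender margins

est = assoc.copy()
for _ in range(100):                          # alternate row/column scaling
    est *= (row_targets / est.sum(axis=1))[:, None]
    est *= (col_targets / est.sum(axis=0))[None, :]

print(np.round(est, 1))
print(est.sum(axis=1), est.sum(axis=0))       # margins now match the targets
```

Dropping the row-target scaling step corresponds to the "limited" variant above, where county totals are left unconstrained.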

Relevance:

90.00%

Publisher:

Abstract:

It is generally recognized that information about the runtime cost of computations can be useful for a variety of applications, including program transformation, granularity control during parallel execution, and query optimization in deductive databases. Most work to date on compile-time cost estimation of logic programs has focused on the estimation of upper bounds on costs. However, in many applications, such as parallel implementations on distributed-memory machines, one would prefer to work with lower bounds instead. The problem with estimating lower bounds is that, in general, it is necessary to account for the possibility of failure of head unification, leading to a trivial lower bound of 0. In this paper, we show how, given type and mode information about procedures in a logic program, it is possible to (semi-automatically) derive nontrivial lower bounds on their computational costs. We also discuss the cost analysis for the special and frequent case of divide-and-conquer programs and show how, as a pragmatic short-term solution, it may be possible to obtain useful results simply by identifying and treating divide-and-conquer programs specially.
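
For the divide-and-conquer case, a lower bound follows from solving the program's cost recurrence once the analysis has ruled out failure. The snippet below solves a made-up recurrence of the classic divide-and-conquer shape; it illustrates the flavour of the result, not the paper's inference system.

```python
from functools import lru_cache

# Illustrative sketch for the divide-and-conquer special case: when type/mode
# analysis guarantees the predicate cannot fail, a lower bound on its cost is
# the solution of its cost recurrence. The recurrence below is a made-up
# example in the classic shape: split cost n plus two half-size calls.

@lru_cache(maxsize=None)
def cost_lb(n):
    """Lower bound on resolution steps for input size n."""
    if n <= 1:
        return 1                     # base clause: one resolution step
    return n + 2 * cost_lb(n // 2)   # divide cost plus conquer cost

for n in (1, 8, 64, 512):
    print(n, cost_lb(n))
```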

Relevance:

90.00%

Publisher:

Abstract:

The study of atmospheric behaviour has been of particular importance in both the SESAR and NextGen programs, under which the current air traffic management (ATM) system is undergoing a profound transformation toward new paradigms, in Europe and the USA respectively, to guide and track aircraft more precisely along more efficient routes. Uncertainty is a fundamental characteristic of weather phenomena, and it is transferred to separation assurance, flight-path de-confliction and flight-planning applications. In this respect, wind is a key factor in predicting the future position of an aircraft, so a deeper and more accurate knowledge of the wind field will reduce ATC uncertainties. The purpose of this thesis is to develop a new and operationally useful technique intended to provide adequate, direct, real-time atmospheric wind fields based on on-board aircraft data, in order to improve aircraft trajectory prediction. To achieve this objective, the following work has been accomplished. The different sources in the aircraft systems that provide the variables needed to derive the wind velocity have been described and analysed, as well as the capabilities that allow this information to be presented for air traffic management applications. The use of aircraft as wind sensors in a terminal area for real-time wind estimation, in order to improve aircraft trajectory prediction, has been explored. Computationally efficient methods have been developed to estimate the horizontal wind components from aircraft velocities (VGS, VCAS/VTAS) and pressure and temperature data. These wind data were used to estimate a real-time wind field using a data-processing approach based on a minimum-variance method. Finally, the accuracy of this procedure has been evaluated to establish whether the information is useful for air traffic control. The initial information comes from a Flight Data Recorder (FDR) sample of aircraft landing at Madrid-Barajas Airport. Data available over a period of more than three months were exploited to derive the wind vector at each point of the airspace. Mathematical models based on different interpolation methods were used to obtain wind vectors in areas without data. Three particular scenarios were employed to test two interpolation methods: a two-dimensional one that treats the two horizontal components independently, and a complex-variable formulation that links both components. These methods were tested in various scenarios with dissimilar results. The methodology has been implemented in a prototype tool in MATLAB to automatically analyse FDR data and determine the wind vector field that aircraft encounter when flying in the studied airspace. The required conditions and the accuracy of the results were derived for this model.
The method developed could be fed by commercial aircraft using their currently available data sources and computational capabilities, providing the data to ATM systems where the proposed method could run. The computed wind velocities, or the ground speed and true airspeed, could then be broadcast, for example, via the Aircraft Communications Addressing and Reporting System (ACARS), ADS-B Out messages, or Mode S. This new source would help update the wind information furnished in meteorological aeronautical products (PAM), meteorological aerodrome reports (AIRMET), and significant meteorological information (SIGMET).
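
The core wind-triangle computation, the horizontal wind as the difference between the ground-velocity and true-airspeed vectors, can be sketched directly; the sample speeds and angles below are illustrative, not values from the Madrid-Barajas FDR data set.

```python
import numpy as np

# Minimal sketch of the wind-triangle computation described above: the
# horizontal wind is the ground-velocity vector minus the true-airspeed
# vector, both resolved from FDR-style quantities (ground speed and track,
# true airspeed and heading). Sample values are illustrative.

def wind_components(gs, track_deg, tas, heading_deg):
    """Return (east, north) wind components in the same units as gs/tas."""
    trk, hdg = np.radians(track_deg), np.radians(heading_deg)
    vg = np.array([gs * np.sin(trk),  gs * np.cos(trk)])    # ground velocity
    va = np.array([tas * np.sin(hdg), tas * np.cos(hdg)])   # air velocity
    return vg - va                                          # wind = Vgs - Vtas

w_e, w_n = wind_components(gs=250.0, track_deg=95.0, tas=240.0, heading_deg=90.0)
print(f"wind east/north: {w_e:.1f}, {w_n:.1f} kt")
```

Per-aircraft estimates of this kind are the raw observations that the minimum-variance field estimation then fuses across the terminal area.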

Relevance:

90.00%

Publisher:

Abstract:

Colors of special-effect coatings depend strongly on the illumination/viewing geometry, giving these coatings an appealing appearance. An open question is the minimum number of measurement geometries required to completely characterize their observed color shift. A recently published principal components analysis (PCA)-based procedure, which estimates the color of special-effect coatings at any geometry from measurements at a reduced set of geometries, was tested in this work using the measurement geometries of the commercial portable multiangle spectrophotometers X-Rite MA98, Datacolor FX10, and BYK-mac as reduced sets. The performance of the proposed PCA procedure for color-shift estimation at these commercial geometries was examined for 15 special-effect coatings. Our results suggest that, for rendering the color appearance of 3D objects covered with special-effect coatings, the color accuracy obtained with this procedure may be sufficient, especially if the geometries of the X-Rite MA98 or Datacolor FX10 are used.
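
Since the published procedure is PCA-based, its general shape can be sketched: learn principal components of the full-geometry response from training coatings, then fit a new coating's component coefficients from measurements at the reduced geometry subset. The data and dimensions below are synthetic placeholders, not the 15-coating data set.

```python
import numpy as np

# Schematic sketch of the PCA-based estimation idea (not the published
# implementation): principal components of the color response across a full
# set of geometries are learned from training coatings; a new coating's
# full response is then recovered from a reduced subset of geometries by
# least-squares fitting of its component coefficients.

rng = np.random.default_rng(0)
n_train, n_geom, n_comp = 15, 40, 4
train = rng.normal(size=(n_train, n_geom))          # stand-in training data

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
P = Vt[:n_comp]                                     # principal components

reduced = np.arange(0, n_geom, 4)                   # indices of measured geometries
sample_full = rng.normal(size=n_geom)               # "true" new coating response
y = sample_full[reduced]                            # what the instrument measures

coef, *_ = np.linalg.lstsq(P[:, reduced].T, y - mean[reduced], rcond=None)
estimate = mean + coef @ P                          # full-geometry reconstruction
print("RMS error:", np.sqrt(np.mean((estimate - sample_full) ** 2)))
```

With real coating data the components capture the systematic geometry dependence, which is why a handful of instrument geometries can suffice.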

Relevance:

90.00%

Publisher:

Abstract:

Due to the increasing integration density and operating frequency of today's high-performance processors, the temperature of a typical chip can easily exceed 100 degrees Celsius. However, the runtime thermal state of a chip is very hard to predict and manage due to the random nature of computing workloads, as well as process, voltage and ambient temperature variability (together called PVT variability). The uneven nature (in both time and space) of the chip's heat dissipation can lead to severe reliability issues and error-prone chip behavior (e.g. timing errors). Many dynamic power/thermal management techniques have been proposed to address this issue, such as dynamic voltage and frequency scaling (DVFS) and clock gating. However, most such techniques require accurate knowledge of the runtime thermal state of the chip to make efficient and effective control decisions. In this work we address the problem of tracking and managing the temperature of microprocessors, which includes the following sub-problems: (1) how to design an efficient sensor-based thermal tracking system for a given design that provides accurate real-time temperature feedback; (2) what statistical techniques can be used to estimate the full-chip thermal profile from very limited (and possibly noise-corrupted) sensor observations; (3) how to adapt to changes in the underlying system's behavior, since such changes can affect the accuracy of the thermal estimates. The thermal tracking methodology proposed in this work is enabled by on-chip sensors, which are already implemented in many modern processors. We first investigate the underlying relationship between heat distribution and power consumption, then introduce an accurate thermal model of the chip system. Based on this model, we characterize the temperature correlation that exists among different chip modules and explore statistical approaches (such as those based on the Kalman filter) that utilize this correlation to estimate accurate chip-level thermal profiles in real time. The estimation is performed from limited sensor information because sensors are usually resource-constrained and noise-corrupted. We further extend the standard Kalman filter approach to account for (1) nonlinear effects such as the leakage-temperature interdependency and (2) varying statistical characteristics in the underlying system model. The proposed thermal tracking infrastructure and estimation algorithms consistently generate accurate thermal estimates even when the system switches among workloads with very distinct characteristics. In experiments, our approaches have demonstrated promising results with much higher accuracy than existing approaches. Such results can be used to ensure thermal reliability and improve the effectiveness of dynamic thermal management techniques.
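
A minimal version of the Kalman-filter estimation named above can be sketched as follows, tracking two correlated module temperatures from a single noisy sensor; the thermal coupling matrices and noise levels are illustrative placeholders, not a calibrated chip model.

```python
import numpy as np

# Minimal Kalman-filter sketch of the idea described above: estimate the
# temperatures of two thermally coupled chip modules when only one module
# carries a sensor. All model matrices are illustrative assumptions.

A = np.array([[0.95, 0.03], [0.02, 0.96]])   # module-to-module thermal coupling
B = np.array([[0.5], [0.1]])                 # power-to-temperature input
H = np.array([[1.0, 0.0]])                   # only module 0 carries a sensor
Q, R = 0.01 * np.eye(2), np.array([[0.5]])   # process / sensor noise covariances

rng = np.random.default_rng(1)
x_true = np.array([[30.0], [28.0]])          # "actual" module temperatures
x, P = np.zeros((2, 1)), 10.0 * np.eye(2)    # filter state and covariance

for _ in range(50):
    u = np.array([[1.0]])                    # measured power input (placeholder)
    x_true = A @ x_true + B @ u              # simulated chip evolves
    z = H @ x_true + rng.normal(scale=0.7)   # noisy sensor reading
    # predict
    x, P = A @ x + B @ u, A @ P @ A.T + Q
    # update: the inter-module correlation in P lets one sensor inform both modules
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x, P = x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P

print("true:", x_true.ravel(), " estimated:", x.ravel())
```

The unsensed module is estimated through the coupling encoded in A and P, which is the mechanism the abstract's full-chip profile estimation relies on at scale.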

Relevance:

90.00%

Publisher:

Abstract:

Pitch Estimation, also known as Fundamental Frequency (F0) estimation, has been a popular research topic for many years and is still actively investigated. The goal of Pitch Estimation is to find the pitch or fundamental frequency of a digital recording of speech or musical notes. It plays an important role because it is the key to identifying which notes are being played and at what time. Pitch Estimation of real instruments is a very hard task to address: each instrument has its own physical characteristics, which are reflected in different spectral characteristics; furthermore, recording conditions vary from studio to studio, and background noise must be considered. This dissertation presents a novel approach to the problem of Pitch Estimation, using Cartesian Genetic Programming (CGP). We take advantage of evolutionary algorithms, in particular CGP, to explore and evolve complex mathematical functions that act as classifiers. These classifiers are used to identify the pitches of piano notes in an audio signal. To help with the codification of the problem, we built a highly flexible CGP Toolbox, generic enough to encode different kinds of programs. The encoded evolutionary algorithm is the (1 + λ) strategy, where the value of λ can be chosen. The toolbox is very simple to use: settings such as the mutation probability and the number of runs and generations are configurable. The Cartesian representation of CGP can take multiple forms and is able to encode function parameters. The toolbox handles different types of fitness functions, both minimization and maximization of f(x), and has a useful system of callbacks. We trained 61 classifiers corresponding to 61 piano notes. A training set of audio signals was used for each classifier: half were signals with the same pitch as the classifier (true positive signals) and the other half were signals with different pitches (true negative signals). F-measure was used as the fitness function. Signals with the same pitch as the classifier that are correctly identified count as true positives; signals with the same pitch that are not identified count as false negatives; signals with a different pitch that are not identified count as true negatives; and signals with a different pitch that are identified count as false positives. Our first approach was to evolve classifiers for identifying artificial signals created by mathematical functions: sine, sawtooth and square waves. Our function set is composed essentially of filtering operations on vectors and arithmetic operations with constants and vectors. All the classifiers correctly identified the true positive signals and did not identify the true negative signals. We then moved to real audio recordings, testing the classifiers on audio signals different from those used during the training phase. The results of this first approach were very promising but could be improved; after slight changes to our approach, the number of false positives was reduced by 33% compared with the first approach. We then applied the evolved classifiers to polyphonic audio signals, and the results indicate that our approach is a good starting point for addressing the problem of Pitch Estimation.
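
The F-measure fitness follows directly from the true/false positive and negative definitions given above; here is a minimal sketch (the stand-in classifier and dummy signals are placeholders, not the evolved CGP programs).

```python
# Small sketch of the F-measure fitness used to score each evolved
# classifier, following the TP/FN/TN/FP definitions given above.

def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall; 0 when undefined."""
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def fitness(classifier, positives, negatives):
    """Score a pitch classifier on true-positive and true-negative signals."""
    tp = sum(1 for s in positives if classifier(s))      # same pitch, detected
    fn = len(positives) - tp                             # same pitch, missed
    fp = sum(1 for s in negatives if classifier(s))      # other pitch, detected
    return f_measure(tp, fp, fn)

# Example with a trivial stand-in classifier over labelled dummy signals.
print(fitness(lambda s: s > 0.5, positives=[0.9, 0.7, 0.4], negatives=[0.2, 0.8]))
```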

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a framework for performing real-time recursive estimation of landmarks’ visual appearance. Imaging data in its original high dimensional space is probabilistically mapped to a compressed low dimensional space through the definition of likelihood functions. The likelihoods are subsequently fused with prior information using a Bayesian update. This process produces a probabilistic estimate of the low dimensional representation of the landmark visual appearance. The overall filtering provides information complementary to the conventional position estimates which is used to enhance data association. In addition to robotics observations, the filter integrates human observations in the appearance estimates. The appearance tracks as computed by the filter allow landmark classification. The set of labels involved in the classification task is thought of as an observation space where human observations are made by selecting a label. The low dimensional appearance estimates returned by the filter allow for low cost communication in low bandwidth sensor networks. Deployment of the filter in such a network is demonstrated in an outdoor mapping application involving a human operator, a ground and an air vehicle.
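
The abstract leaves the filter's exact form open, so the following is only a schematic Gaussian version of the recursive update it describes: a prior over the low-dimensional appearance vector fused with each compressed observation, whether it comes from a robot or a human.

```python
import numpy as np

# Schematic sketch of the recursive appearance update: a Gaussian prior over
# the low-dimensional appearance vector is fused with a Gaussian likelihood
# derived from the compressed observation. Dimensions and noise levels are
# illustrative assumptions.

def bayes_update(mu, P, z, R):
    """Fuse prior N(mu, P) with a direct observation z ~ N(appearance, R)."""
    K = P @ np.linalg.inv(P + R)             # gain for a direct observation
    return mu + K @ (z - mu), (np.eye(len(mu)) - K) @ P

mu, P = np.zeros(3), np.eye(3)               # prior appearance estimate
R = 0.2 * np.eye(3)                          # observation noise
for z in [np.array([0.9, 0.1, 0.4]), np.array([1.0, 0.0, 0.5])]:
    mu, P = bayes_update(mu, P, z, R)        # robot or human observation
print(mu)
```

Because the fused state is only a few numbers plus a covariance, it is cheap to share over a low-bandwidth sensor network, as the abstract notes.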

Relevance:

80.00%

Publisher:

Abstract:

Smart Card data from Automated Fare Collection systems have been considered a promising source of information for transit planning. However, the literature has been limited to mining travel patterns from transit users and suggesting the potential of this information. This paper proposes a method for mining spatially regular origins and destinations, and temporally habitual travel times, from transit users. This travel regularity is discussed as being useful for transit planning. After reconstructing the travel itineraries, three levels of Density-Based Spatial Clustering of Applications with Noise (DBSCAN) are utilised to retrieve the travel regularity of each frequent transit user. Analyses of passenger classification and personal travel time variability estimation are performed as examples of using travel regularity in transit planning. The methodology introduced in this paper is of interest to transit authorities in planning and management.
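
One level of the clustering can be sketched with an off-the-shelf DBSCAN, here applied to a single cardholder's boarding coordinates to recover regular origins; the coordinates and parameters are placeholders, and the paper's method applies three such levels.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Illustrative sketch of one clustering level: DBSCAN over a single
# cardholder's boarding-stop coordinates to recover regular origins.
# Coordinates, eps and min_samples below are placeholders.

boardings = np.array([
    [153.020, -27.470], [153.021, -27.471], [153.019, -27.469],  # home stop
    [153.100, -27.500], [153.101, -27.499],                      # work stop
    [153.300, -27.600],                                          # one-off trip
])

labels = DBSCAN(eps=0.005, min_samples=2).fit_predict(boardings)
print(labels)   # noise points get label -1; clusters are regular origins
```

Density-based clustering suits this task because a frequent user's regular stops form dense groups while occasional trips remain labelled as noise.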

Relevance:

80.00%

Publisher:

Abstract:

The increasing integration of Renewable Energy Resources (RER) and the role of Electric Energy Storage (EES) in distribution systems have created interest in energy management strategies. EES has become a suitable resource for managing energy consumption and generation in the smart grid. Optimized scheduling of EES can also maximize a retailer's profit by introducing energy time-shift opportunities. This paper proposes a new strategy for scheduling EES in order to reduce the impact of electricity market price and load uncertainty on retailers' profit. The proposed strategy optimizes the cost of purchasing energy with the objective of minimizing the surplus energy cost in the hedging contract. A case study demonstrates the impact of the proposed strategy on retailers' financial benefit.
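
As a toy illustration of the energy time-shift idea (not the paper's uncertainty-aware, hedging-contract formulation), the sketch below schedules storage charge/discharge against an hourly price curve to minimise the purchase cost for a fixed load, posed as a small linear program. Prices, load, and storage limits are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Toy energy time-shift sketch: a lossless storage unit with zero initial
# level charges in cheap periods and discharges in expensive ones to cut
# the cost of serving a fixed load. All numbers are illustrative.

price = np.array([30., 25., 20., 40., 60., 55.])   # $/MWh per period
load = np.full(6, 10.)                             # MWh demand per period
cap, rate = 15., 5.                                # storage energy/power limits

# Decision variable: charge c_t (positive = charging, negative = discharging).
# Grid purchase is g_t = load_t + c_t, so cost = price . (load + c) and the
# load term is constant; we minimise price . c. The storage level, the
# cumulative sum of c, must stay within [0, cap].
T = len(price)
A_ub = np.vstack([np.tril(np.ones((T, T))),        #  cumulative charge <= cap
                  -np.tril(np.ones((T, T)))])      # -cumulative charge <= 0
b_ub = np.concatenate([np.full(T, cap), np.zeros(T)])
res = linprog(c=price, A_ub=A_ub, b_ub=b_ub,
              bounds=[(-rate, rate)] * T)          # power limit per period
print("charge schedule:", np.round(res.x, 1))
print("purchase cost:", price @ (load + res.x))
```

With these numbers the optimum charges during the three cheap periods and discharges during the three expensive ones, which is precisely the time-shift opportunity the abstract refers to.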