906 results for state estimation
Abstract:
A discussion of nonlinear dynamics, demonstrated by the familiar automobile, is followed by the development of a systematic method of analysis of a possibly nonlinear time series using difference equations in the general state-space format. This format allows recursive state-dependent parameter estimation after each observation, thereby revealing the dynamics inherent in the system in combination with random external perturbations. The one-step-ahead prediction errors at each time period, transformed to have constant variance, and the estimated parametric sequences provide the information to (1) formally test whether time series observations y_t are some linear function of random errors e_s, for some t and s, or whether the series would more appropriately be described by a nonlinear model such as bilinear, exponential, threshold, etc.; (2) formally test whether a statistically significant change has occurred in structure/level, either historically or as it occurs; (3) forecast nonlinear systems with a new and innovative (but very old numerical) technique that uses rational functions to extrapolate individual parameters as smooth functions of time, which are then combined to obtain the forecast of y; and (4) suggest a measure of resilience, i.e. how much perturbation a structure/level can tolerate, whether internal or external to the system, and remain statistically unchanged. Although similar to one-step control, this provides a less rigid way to think about changes affecting social systems. Applications consisting of the analysis of some familiar and some simulated series demonstrate the procedure. Empirical results suggest that this state-space or modified augmented Kalman filter may provide interesting ways to identify particular kinds of nonlinearities as they occur in structural change via the state trajectory. A computational flow chart detailing the computations and the software input and output is provided in the body of the text. IBM Advanced BASIC program listings to accomplish most of the analysis are provided in the appendix.
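A minimal sketch of the kind of recursion this abstract describes, assuming a time-varying AR(2) model whose coefficients are treated as a random-walk state and updated by a Kalman filter after each observation (the model order, noise variances and names are illustrative assumptions, not the author's exact formulation):

    import numpy as np

    def recursive_ar_kalman(y, p=2, q=1e-4, r=1.0):
        """Track time-varying AR(p) coefficients theta_t with a Kalman filter.
        State model:   theta_t = theta_{t-1} + w_t,  w_t ~ N(0, q*I)
        Observation:   y_t = [y_{t-1},...,y_{t-p}] . theta_t + e_t,  e_t ~ N(0, r)
        """
        y = np.asarray(y, dtype=float)
        theta = np.zeros(p)                 # current coefficient estimate
        P = np.eye(p)                       # coefficient covariance
        thetas, innovations = [], []
        for t in range(p, len(y)):
            h = y[t - p:t][::-1]            # regressor: p most recent observations
            P = P + q * np.eye(p)           # time update (random-walk coefficients)
            s = h @ P @ h + r               # innovation variance
            k = P @ h / s                   # Kalman gain
            e = y[t] - h @ theta            # one-step-ahead prediction error
            theta = theta + k * e
            P = P - np.outer(k, h @ P)
            thetas.append(theta.copy())
            innovations.append(e / np.sqrt(s))  # standardized, constant-variance error
        return np.array(thetas), np.array(innovations)

The standardized innovations can then be examined for departures from linearity, and the trajectory of the coefficient estimates inspected for structural change, in the spirit of points (1) and (2) above.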
Abstract:
This paper examines empirically whether financial deepening has contributed to poverty reduction in India. Using unbalanced panel data for 28 states and union territories between 1973 and 2004, we estimate models in which the poverty ratio is explained by financial deepening, controlling for international openness, the inflation rate, and economic growth. From the dynamic generalised method of moments (GMM) estimation, we find that financial deepening and economic growth alleviate poverty, while international openness and the inflation rate have the opposite effect. These results are robust whether the poverty ratio used is that of rural areas, urban areas, or the economy as a whole.
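A sketch of the dynamic-panel idea, using an Anderson-Hsiao style instrumental-variables estimator (a simpler relative of the GMM estimator the paper actually uses): first-difference the poverty equation to remove state fixed effects, then instrument the lagged differenced dependent variable with the second lag of its level. Variable names and the balanced-panel layout are illustrative assumptions.

    import numpy as np

    def anderson_hsiao(y, X):
        """y: (N, T) poverty ratio per state; X: (N, T, K) regressors
        (financial deepening, openness, inflation, growth).
        Estimates [rho, beta_1..beta_K] in
            dy_it = rho*dy_{i,t-1} + beta'.dx_it + de_it,
        instrumenting dy_{i,t-1} with the level y_{i,t-2}."""
        N, T = y.shape
        rows_R, rows_Z, rows_dy = [], [], []
        for i in range(N):
            for t in range(2, T):
                dy_t   = y[i, t] - y[i, t - 1]
                dy_lag = y[i, t - 1] - y[i, t - 2]
                dX_t   = X[i, t] - X[i, t - 1]
                rows_R.append(np.r_[dy_lag, dX_t])      # regressors
                rows_Z.append(np.r_[y[i, t - 2], dX_t])  # instruments
                rows_dy.append(dy_t)
        R, Z, dy = map(np.asarray, (rows_R, rows_Z, rows_dy))
        # 2SLS: beta = (R'Pz R)^{-1} R'Pz dy, with Pz = Z (Z'Z)^{-1} Z'
        ZtZ_inv = np.linalg.inv(Z.T @ Z)
        PzR = Z @ (ZtZ_inv @ (Z.T @ R))
        return np.linalg.solve(R.T @ PzR, R.T @ (Z @ (ZtZ_inv @ (Z.T @ dy))))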
Abstract:
In this paper, we seek to expand the use of direct methods in real-time applications by proposing a vision-based strategy for pose estimation of aerial vehicles. The vast majority of approaches make use of features to estimate motion. Conversely, the strategy we propose is based on a multi-resolution (MR) implementation of an image registration technique (Inverse Compositional Image Alignment, ICIA) using direct methods. An on-board camera in a downward-looking configuration and the assumption of planar scenes are the bases of the algorithm. The motion between frames (rotation and translation) is recovered by decomposing the frame-to-frame homography obtained by the ICIA algorithm applied to a patch that covers around 80% of the image. When visual estimation is required (e.g. during a GPS drop-out), this motion is integrated with the last known estimate of the vehicle's state obtained from the on-board sensors (GPS/IMU), and the subsequent estimates are based only on the vision-based motion estimation. The proposed strategy is tested with real flight data in representative stages of a flight: cruise, landing and take-off, the latter two being considered critical. The performance of the pose estimation strategy is analyzed by comparing it with the GPS/IMU estimates. Results show correlation between the visual estimates obtained with the MR-ICIA and the GPS/IMU data, demonstrating that the visual estimation can provide a good approximation of the vehicle's state when required (e.g. during GPS drop-outs). In terms of performance, the proposed strategy is able to maintain an estimate of the vehicle's state for more than one minute at real-time frame rates, based only on visual information.
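A sketch of the homography-decomposition step under the same planar-scene, downward-looking assumptions. It uses OpenCV's generic direct (intensity-based) ECC alignment as a stand-in for the paper's MR-ICIA registration, and the camera matrix K is an illustrative placeholder:

    import cv2
    import numpy as np

    K = np.array([[800., 0., 320.],    # assumed intrinsics (fx, fy, cx, cy)
                  [0., 800., 240.],
                  [0., 0., 1.]])

    def frame_to_frame_motion(prev_gray, curr_gray):
        """Estimate R, t (up to the plane-distance scale) between two frames
        of a planar scene via direct alignment + homography decomposition."""
        H = np.eye(3, dtype=np.float32)
        # Direct image alignment; the paper uses MR-ICIA instead of ECC.
        _, H = cv2.findTransformECC(prev_gray, curr_gray, H,
                                    cv2.MOTION_HOMOGRAPHY)
        # Decompose into the candidate (R, t, plane normal) solutions.
        n_sol, Rs, ts, normals = cv2.decomposeHomographyMat(H.astype(np.float64), K)
        # Heuristic: with a downward-looking camera over a planar scene, pick
        # the solution whose plane normal points back towards the camera (-Z).
        best = max(range(n_sol), key=lambda i: -normals[i][2, 0])
        return Rs[best], ts[best]

The recovered per-frame motion would then be chained onto the last GPS/IMU state estimate, as the abstract describes.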
Abstract:
The estimation of modal parameters of a structure from ambient measurements has attracted the attention of many researchers in recent years. The procedure is now well established, and the use of state space models, stochastic system identification methods and stabilization diagrams makes it possible to identify the modes of the structure. In this paper the contribution of each identified mode to the measured vibration is discussed. This modal contribution is computed using the Kalman filter and is an indicator of the importance of each mode. The variation of the modal contribution with the order of the model is also studied. This analysis suggests selecting the order of the state space model as the order that includes the modes with the highest contribution. The order obtained using this method is compared to those obtained using other well-known methods, such as the Akaike criterion for time series or the singular values of the weighted projection matrix in the Stochastic Subspace Identification method. Both simulated and measured vibration data are used to show the practicability of the derived technique. Finally, it is important to remark that the method can be used with any identification method that works with state space models.
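For context, the standard step that links an identified state-space model to modal parameters: the eigenvalues of the discrete-time state matrix give the natural frequencies and damping ratios, and the observed eigenvectors give the mode shapes. A minimal sketch, with the sampling interval dt and the matrices A, C assumed to come from a stochastic system identification:

    import numpy as np

    def modal_parameters(A, C, dt):
        """Natural frequencies (Hz), damping ratios and mode shapes from a
        discrete-time state-space model (A, C) sampled every dt seconds."""
        lam, psi = np.linalg.eig(A)        # discrete-time eigenvalues/vectors
        mu = np.log(lam) / dt              # continuous-time poles
        freqs = np.abs(mu) / (2 * np.pi)   # natural frequencies in Hz
        damping = -mu.real / np.abs(mu)    # damping ratios
        shapes = C @ psi                   # mode shapes at the sensor locations
        keep = mu.imag > 0                 # one of each complex-conjugate pair
        return freqs[keep], damping[keep], shapes[:, keep]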
Abstract:
An increasing number of neuroimaging studies are concerned with the identification of interactions or statistical dependencies between brain areas. Dependencies between the activities of different brain regions can be quantified with functional connectivity measures such as the cross-correlation coefficient. An important factor limiting the accuracy of such measures is the amount of empirical data available. For event-related protocols, the amount of data also affects the temporal resolution of the analysis. We use analytical expressions to calculate the amount of empirical data needed to establish whether a certain level of dependency is significant when the time series are autocorrelated, as is the case for biological signals. These analytical results are then contrasted with estimates from simulations based on real data recorded with magnetoencephalography during a resting-state paradigm and during the presentation of visual stimuli. Results indicate that, for broadband signals, 50–100 s of data are required to detect a true underlying cross-correlation coefficient of 0.05. This corresponds to a resolution of a few hundred milliseconds for typical event-related recordings. The required time window increases for narrow band signals as frequency decreases. For instance, approximately 3 times as much data is necessary for signals in the alpha band. Important implications can be derived for the design and interpretation of experiments to characterize weak interactions, which are potentially important for brain processing.
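A sketch of the kind of calculation the authors describe, assuming a Fisher z-test on the sample correlation with the variance inflated by the signals' autocorrelations (Bartlett's approximation for two AR(1) processes); the 0.05 target comes from the abstract, while the AR(1) model and gain values are illustrative assumptions:

    import numpy as np
    from scipy.stats import norm

    def required_samples(r_true, rho_x, rho_y, alpha=0.05, power=0.8):
        """Samples needed so that a true cross-correlation r_true between two
        autocorrelated signals is detected at level alpha with given power.
        rho_x, rho_y: lag-1 autocorrelations (AR(1) approximation)."""
        # Bartlett: variance of the sample correlation inflates by this factor
        inflation = (1 + rho_x * rho_y) / (1 - rho_x * rho_y)
        z = np.arctanh(r_true)                 # Fisher transform of the target
        n_eff = ((norm.ppf(1 - alpha / 2) + norm.ppf(power)) / z) ** 2 + 3
        return int(np.ceil(n_eff * inflation))  # raw samples required

    # e.g. a signal sampled at 600 Hz with mild autocorrelation:
    # required_samples(0.05, 0.3, 0.3) / 600.0  -> recording length in seconds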
Abstract:
Computing the modal parameters of structural systems often requires processing data from multiple non-simultaneously recorded setups of sensors. These setups share some sensors in common, the so-called reference sensors, which are fixed for all measurements, while the other sensors change their position from one setup to the next. One possibility is to process the setups separately, resulting in different modal parameter estimates for each setup. The reference sensors are then used to merge or glue the different parts of the mode shapes to obtain global mode shapes, while the natural frequencies and damping ratios are usually averaged. In this paper we present a new state space model that processes all setups at once. As a result, the global mode shapes are obtained automatically, and a single value for the natural frequency and damping ratio of each mode is estimated. We also investigate the estimation of this model using maximum likelihood and the Expectation Maximization algorithm, and apply this technique to simulated and measured data corresponding to different structures.
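For reference, a sketch of the traditional merging step that the proposed model replaces: each setup's partial mode shape is rescaled by a complex least-squares fit over the shared reference sensors and the pieces are glued into one global shape (the index layout and names are illustrative assumptions):

    import numpy as np

    def glue_mode_shapes(shapes, ref_ch, dof):
        """shapes: list of complex mode-shape vectors, one per setup;
        ref_ch[k]: channel indices of the reference sensors within setup k;
        dof[k]: global DOF number of every channel of setup k."""
        base_ref = shapes[0][ref_ch[0]]          # setup 0 fixes scale and phase
        n_dof = 1 + max(max(d) for d in dof)
        phi_g = np.full(n_dof, np.nan + 0j)
        for k, phi in enumerate(shapes):
            ref = phi[ref_ch[k]]
            # complex least-squares factor aligning setup k with setup 0
            scale = (np.conj(ref) @ base_ref) / (np.conj(ref) @ ref)
            phi_g[dof[k]] = scale * phi
        return phi_g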
Abstract:
Radiative shock waves play a pivotal role in the transport of energy into the stellar medium. This fact has led to many efforts to scale the astrophysical phenomena down to accessible laboratory conditions, and their study has been highlighted as an area requiring further experimental investigation. Low-density material with high atomic mass is suitable for reaching the radiative regime, and therefore low-density xenon gas is commonly used as the medium in which radiative shocks such as radiative blast waves propagate. In this work, by means of collisional-radiative steady-state calculations, a characterization and analysis of microscopic quantities of laboratory blast waves launched in xenon clusters are made. Thus, for example, the average ionization, the charge state distribution, the cooling time and photon mean free paths are studied. Furthermore, for a particular experiment, the effects of self-absorption and self-emission on the specific intensity that is emitted by the shock front and travels through the radiative precursor are investigated. Finally, for that experiment, since the electron temperature is not measured experimentally, this quantity is estimated both for the shock shell and for the radiative precursor.
Abstract:
Computing the modal parameters of large structures in Operational Modal Analysis often requires processing data from multiple non-simultaneously recorded setups of sensors. These setups share some sensors in common, the so-called reference sensors, which are fixed for all measurements, while the other sensors are moved from one setup to the next. One possibility is to process the setups separately, which results in different modal parameter estimates for each setup. The reference sensors are then used to merge or glue the different parts of the mode shapes to obtain global mode shapes, while the natural frequencies and damping ratios are usually averaged. In this paper we present a state space model that processes all setups at once, so that the global mode shapes are obtained automatically and a single value for the natural frequency and damping ratio of each mode is computed. We also show how this model can be estimated using maximum likelihood and the Expectation Maximization algorithm. We apply this technique to real data measured at a footbridge.
Abstract:
This paper presents a time-domain stochastic system identification method based on Maximum Likelihood Estimation and the Expectation Maximization algorithm, applied to the estimation of modal parameters from system input and output data. The effectiveness of this structural identification method is evaluated through numerical simulation. Modal parameters (eigenfrequencies, damping ratios and mode shapes) of the simulated structure are estimated by applying the proposed identification method to a set of 100 simulated cases. The numerical results show that the proposed method estimates the modal parameters with precision even in the presence of 20% measurement noise. Finally, the advantages and disadvantages of the method are discussed.
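A compact sketch of the Maximum Likelihood / EM machinery on the simplest possible case, a scalar linear-Gaussian state-space model. The paper works with the multivariate model and input-output data; this only shows the pattern: E-step = Kalman filter plus RTS smoother, M-step = closed-form parameter updates.

    import numpy as np

    def em_scalar_ssm(y, a=0.5, c=1.0, q=1.0, r=1.0, iters=50):
        """EM for x_t = a x_{t-1} + w (var q),  y_t = c x_t + v (var r)."""
        T = len(y)
        for _ in range(iters):
            # ---- E-step: Kalman filter ----
            xf, Pf, xp, Pp = np.zeros(T), np.zeros(T), np.zeros(T), np.zeros(T)
            x, P = 0.0, 1.0
            for t in range(T):
                xp[t], Pp[t] = a * x, a * a * P + q       # predict
                k = Pp[t] * c / (c * c * Pp[t] + r)       # gain
                x = xp[t] + k * (y[t] - c * xp[t])        # update
                P = (1 - k * c) * Pp[t]
                xf[t], Pf[t] = x, P
            # ---- E-step: RTS smoother (with lag-one covariances) ----
            xs, Ps, Pcc = xf.copy(), Pf.copy(), np.zeros(T)
            for t in range(T - 2, -1, -1):
                J = Pf[t] * a / Pp[t + 1]
                xs[t] = xf[t] + J * (xs[t + 1] - xp[t + 1])
                Ps[t] = Pf[t] + J * J * (Ps[t + 1] - Pp[t + 1])
                Pcc[t + 1] = J * Ps[t + 1]                # cov(x_{t+1}, x_t | y)
            Exx = xs * xs + Ps                            # E[x_t^2]
            Exx1 = xs[1:] * xs[:-1] + Pcc[1:]             # E[x_t x_{t-1}]
            # ---- M-step: closed-form updates ----
            a = Exx1.sum() / Exx[:-1].sum()
            q = (Exx[1:].sum() - a * Exx1.sum()) / (T - 1)
            c = (y * xs).sum() / Exx.sum()
            r = ((y - c * xs) ** 2 + c * c * Ps).sum() / T
        return a, c, q, r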
Abstract:
Whole brain resting state connectivity is a promising biomarker that might help to obtain an early diagnosis in many neurological diseases, such as dementia. Inferring resting-state connectivity is often based on correlations, which are sensitive to indirect connections, leading to an inaccurate representation of the real backbone of the network. The precision matrix is a better representation of whole brain connectivity, as it considers only direct connections. The network structure can be estimated using the graphical lasso (GL), which achieves sparsity through l1-regularization on the precision matrix. In this paper, we propose a structural-connectivity-adaptive version of the GL, in which weaker anatomical connections are represented as stronger penalties on the corresponding functional connections. We applied beamformer source reconstruction to the resting state MEG recordings of 81 subjects, of whom 29 were healthy controls, 22 had single-domain amnestic Mild Cognitive Impairment (MCI), and 30 had multiple-domain amnestic MCI. An atlas-based anatomical parcellation of 66 regions was obtained for each subject, and time series were assigned to each of the regions. The fiber densities between the regions, obtained with deterministic tractography from diffusion-weighted MRI, were used to define the anatomical connectivity. Precision matrices were obtained from the region-specific time series in five different frequency bands. We compared our method with the traditional GL and with a functional adaptive version of the GL, in terms of log-likelihood and classification accuracy between the three groups. We conclude that introducing an anatomical prior improves the expressivity of the model and, in most cases, leads to a better classification between groups.
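A naive sketch of the edge-weighted penalty idea: fiber densities are mapped to per-edge penalties (weaker anatomical connection implies a stronger penalty), and the weighted graphical-lasso objective is solved with simple proximal gradient steps. The penalty mapping, step size and safeguard are illustrative assumptions; a production implementation would use a dedicated solver that accepts a penalty matrix.

    import numpy as np

    def weighted_graphical_lasso(S, fiber_density, alpha=0.1, step=0.01,
                                 iters=500):
        """Maximise  logdet(Theta) - tr(S Theta) - sum_ij P_ij |Theta_ij|,
        with penalties P_ij large where anatomical connectivity is weak."""
        p = S.shape[0]
        # anatomical prior: weak fibers -> strong penalty (illustrative mapping)
        P = alpha / (1.0 + fiber_density / fiber_density.mean())
        np.fill_diagonal(P, 0.0)                    # do not penalise variances
        Theta = np.linalg.inv(S + 0.1 * np.eye(p))  # positive-definite start
        for _ in range(iters):
            grad = S - np.linalg.inv(Theta)         # gradient of -loglikelihood
            Theta = Theta - step * grad             # gradient step
            # proximal step: elementwise soft-thresholding with weights P
            Theta = np.sign(Theta) * np.maximum(np.abs(Theta) - step * P, 0.0)
            Theta = (Theta + Theta.T) / 2           # keep symmetric
            # crude safeguard: clip tiny or negative eigenvalues
            w, V = np.linalg.eigh(Theta)
            Theta = (V * np.maximum(w, 1e-6)) @ V.T
        return Theta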
Abstract:
As the energy of particle and heavy-ion accelerators such as CERN or GSI, of fusion reactors such as JET or ITER, and of other scientific experiments increases, the use of remote handling techniques to interact with the irradiated environment becomes ever more indispensable. Until now, the dose rate at CERN could reach values of a few mSv for cooling times on the order of hours, which allowed human intervention for maintenance tasks. During the first plasma tests at JET, values close to 200 μSv were reached after a cooling time of 4 months, and the use of remote handling techniques was already widespread. There is a clear tendency towards higher radioactivity levels in this type of facility in the future. A clear example is ITER, where values of 450 Sv/h are expected at the centre of the torus after 11 days of cooling; likewise, the new energy levels at CERN will demand a commitment to remote maintenance. This thesis is framed in these circumstances: it studies a bilateral control system based on force-position, avoiding the use of force/torque sensors, whose electronic content makes them especially sensitive in these environments. The work focuses on the teleoperation of industrial robots, whose well-known reliability and ease of adaptation to these environments, together with their low cost and high availability, make them an interesting alternative for remote handling tasks compared with expensive custom-made solutions.

First, the kinematic problem of master-slave teleoperation with dissimilar kinematics is considered, and a general method for its solution is developed, which includes the use of assistive forces to guide the operator. The experiments carried out with an ABB robot are then explained in detail, showing the difficulties encountered and giving recommendations for overcoming them. The kinematic study concludes with a method for matching the workspaces of a dissimilar master and slave. The research then turns to dynamics, studying robot modelling with a view to obtaining a method for estimating the external forces acting on the robot. During the characterisation of the dynamic model, several tests are performed in order to find a compromise between computational complexity and estimation error. Key points for modelling and characterising robots with a parallelogram structure are also given, and the desired control architecture is presented.

Once a complete model of the slave is obtained, different alternatives for estimating external forces in real time are investigated, minimising the differentiation of the position signal in order to minimise noise. The study starts from classical state observers and evolves towards a Luenberger-sliding observer, whose implementation is relatively simple and whose results are convincing. The proposed observer is also analysed in a simulated bilateral control, in which the force feedback obtained with classical techniques based on position error is compared against a force-position control in which the force is estimated rather than measured. The proposed solution is shown to give results comparable to the classical architectures while introducing an alternative for the teleoperation of industrial robots in radioactive environments, which would otherwise be impossible.

Finally, the problems arising from the practical application of teleoperation in the scenarios mentioned above are analysed. Owing to conditions that are prohibitive for any electronic equipment, the control systems must be placed at a great distance from the manipulators, giving rise to cable lengths of hundreds of metres. Under these conditions, PWM-based drives generate overvoltages that can be destructive for the system formed by drive, wiring and actuator, and which therefore have to be eliminated. This work proposes a solution based on a commercial LC filter and proves extensively that its inclusion has no negative effects on the control of the actuator.
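A minimal sketch of the force-estimation idea on a one-degree-of-freedom joint: the external torque is modelled as a slowly varying extra state and reconstructed by a classical Luenberger observer from the measured position. The thesis develops a Luenberger-sliding variant on a full robot model; the inertia, friction and gains here are illustrative assumptions.

    import numpy as np

    # 1-DOF joint:  J*ddq = u - b*dq + tau_ext ;  measured output: q
    J_m, b = 0.5, 0.1                      # assumed inertia and viscous friction
    A = np.array([[0., 1., 0.],            # state: [q, dq, tau_ext]
                  [0., -b / J_m, 1. / J_m],
                  [0., 0., 0.]])           # tau_ext modelled as near-constant
    B = np.array([0., 1. / J_m, 0.])
    C = np.array([1., 0., 0.])
    L = np.array([30., 300., 1000.])       # observer gains (e.g. pole placement)

    def observer_step(x_hat, u, q_meas, dt):
        """One Euler step of  d/dt x_hat = A x_hat + B u + L (q - C x_hat).
        x_hat[2] is the running estimate of the external torque, obtained
        without differentiating the position measurement."""
        x_hat_dot = A @ x_hat + B * u + L * (q_meas - C @ x_hat)
        return x_hat + dt * x_hat_dot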
Abstract:
This doctoral thesis presents a comprehensive quality control procedure for photovoltaic plants, covering from the initial phase of estimating the expected energy production to the surveillance of the installation's performance once it is in operation. The procedure makes it possible to reduce the uncertainty associated with plant behaviour, to increase long-term reliability, and to optimise performance. The situation of photovoltaic technology has evolved enormously in recent years, making photovoltaic plants capable of producing energy at prices fully competitive with other energy sources. This increases the requirements on the performance and reliability of these facilities. Meeting these requirements calls for adapting the quality control procedures applied, as well as developing new methods that lead to a more complete knowledge of the state of the plants and allow them to be kept under surveillance over time. In addition, today's tight operating margins require that production estimation methods with the lowest possible uncertainty be available during the design phase.

The quality control proposal presented in this work starts from earlier protocols oriented to the commissioning phase of a photovoltaic installation and complements them with methods applicable to the operation phase, paying particular attention to the main problems that appear in plants throughout their service life (hot spots, soiling impact, ageing...). It also incorporates a protocol for the surveillance and analysis of plant performance based on monitoring data, ranging from checking the validity of the recorded data itself to fault detection and diagnosis, which provides automated and detailed knowledge of the plants. This procedure is oriented towards facilitating operation and maintenance tasks, so as to guarantee a high operating availability of the installation. Returning to the initial phase of calculating production expectations, the data recorded at the plants are used to improve the methods for estimating the incident irradiation, which is the component that adds the most uncertainty to the modelling process. The development and application of this quality control procedure have been carried out in 39 large photovoltaic plants, totalling 250 MW, distributed across several countries in Europe and Latin America.
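As an illustration of the surveillance idea, a sketch of one standard monitoring check, the daily performance ratio compared against an alert threshold. The nameplate power, threshold and record layout are illustrative assumptions, not the thesis' actual protocol.

    def daily_performance_ratio(e_ac_kwh, h_poa_kwh_m2, p_stc_kw, g_stc=1.0):
        """PR = measured AC energy / energy expected from the in-plane
        irradiation at nameplate (STC) efficiency; g_stc in kW/m^2."""
        expected_kwh = p_stc_kw * h_poa_kwh_m2 / g_stc
        return e_ac_kwh / expected_kwh

    def flag_underperformance(daily_records, p_stc_kw, threshold=0.70):
        """daily_records: iterable of (date, energy_kWh, irradiation_kWh_m2).
        Returns the days whose performance ratio falls below the threshold."""
        alerts = []
        for date, e_kwh, h_kwh_m2 in daily_records:
            if h_kwh_m2 > 0.5:   # skip days with too little irradiation data
                pr = daily_performance_ratio(e_kwh, h_kwh_m2, p_stc_kw)
                if pr < threshold:
                    alerts.append((date, round(pr, 3)))
        return alerts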
Abstract:
This work addresses the estimation of velocity and acceleration from digital position data. Several classic methods are reviewed and implemented with real position data from a low-cost digital sensor mounted on a hydraulic linear actuator, and the results are analyzed and compared. It is shown that static methods have a limited bandwidth of application, and that the performance of some methods can be enhanced by adapting their parameters according to the current state.
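A sketch of two of the classic estimators such a review typically covers: a finite-difference derivative and an alpha-beta tracking filter, whose gains could be adapted to the current state as the abstract suggests (the sampling period and gain values are illustrative assumptions):

    import numpy as np

    def finite_difference(pos, dt):
        """Backward-difference velocity: simple, but amplifies sensor noise."""
        return np.diff(pos) / dt

    def alpha_beta(pos, dt, alpha=0.85, beta=0.005):
        """Alpha-beta filter: predicts with the current velocity estimate and
        corrects with the measured position; lower gains -> smoother, slower."""
        x, v = pos[0], 0.0
        vel = np.zeros(len(pos))
        for k in range(1, len(pos)):
            x_pred = x + v * dt                # predict position
            r = pos[k] - x_pred                # innovation
            x = x_pred + alpha * r             # correct position
            v = v + (beta / dt) * r            # correct velocity
            vel[k] = v
        return vel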
Abstract:
Colorado State Parks is currently managing overstocked wildland fuels on critical parklands through mechanical mastication projects, but wants to determine the feasibility of instead turning the thinned materials into marketable biomass fuel. Accurate estimates of potential volume and the associated production costs are critical to determining this feasibility. In this study, forest inventory data were used to estimate potential volume, and were combined with data from previous studies to estimate potential harvest and haul costs and revenues. This study reveals that a biomass utilization alternative may be an economically feasible option for State Parks to pursue, and in many cases may be more economically favorable than current mechanical treatments.
Abstract:
In this letter, a new approach to crop phenology estimation with remote sensing is presented. The proposed methodology aims to exploit tools from a dynamical-systems context. From a temporal sequence of images, a geometrical model is derived, which allows this temporal evolution to be translated into an estimation problem. The state-space evolution model is obtained through dimensionality reduction of the observations by principal component analysis, which defines the state variables. Estimation is then achieved by optimally combining the generated model with actual samples using a Kalman filter. As a proof of concept, results obtained with this approach over rice fields, exploiting stacks of TerraSAR-X dual-polarization images, are shown.
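A sketch of the pipeline the letter outlines, under illustrative assumptions: PCA on a stack of training acquisitions defines a low-dimensional state, a linear evolution matrix is fitted by regressing successive states, and a Kalman filter then fuses that model with the acquisitions of a new field.

    import numpy as np

    def fit_phenology_model(stack, n_components=3):
        """stack: (n_dates, n_features) average observables of training fields.
        Returns the PCA basis and a linear evolution model fitted to the
        principal-component trajectory."""
        mean = stack.mean(axis=0)
        U, s, Vt = np.linalg.svd(stack - mean, full_matrices=False)
        W = Vt[:n_components]                   # PCA basis (defines the state)
        Z = (stack - mean) @ W.T                # state trajectory over time
        # least-squares one-step evolution:  z_{t+1} ~ A z_t
        A = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)[0].T
        return mean, W, A

    def kalman_track(y_seq, mean, W, A, q=0.01, r=0.1):
        """Track the phenology-related state of one field over its time series."""
        k = A.shape[0]
        H = W.T                                 # observation matrix
        z, P = np.zeros(k), np.eye(k)
        states = []
        for y in y_seq:                         # y: feature vector of one date
            z, P = A @ z, A @ P @ A.T + q * np.eye(k)       # predict
            S = H @ P @ H.T + r * np.eye(H.shape[0])
            K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
            z = z + K @ ((y - mean) - H @ z)                # correct
            P = (np.eye(k) - K @ H) @ P
            states.append(z.copy())
        return np.array(states)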