677 results for Kalman, Filmagem de
Abstract:
Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2015.
Abstract:
A new type of space debris was recently discovered by Schildknecht in near-geosynchronous orbit (GEO). These objects were later identified as exhibiting properties associated with High Area-to-Mass Ratio (HAMR) objects. According to their brightness magnitudes (light curves), high rotation rates and composition properties (albedo, amount of specular and diffuse reflection, colour, etc.), it is thought that these objects are multilayer insulation (MLI). Observations have shown that this debris type is very sensitive to environmental disturbances, particularly solar radiation pressure, because their shapes are easily deformed, leading to changes in the Area-to-Mass Ratio (AMR) over time. This thesis proposes a simple, effective flexible model of the thin, deformable membrane using two different methods. Firstly, the debris is modelled with Finite Element Analysis (FEA) using Bernoulli-Euler beam theory (the "Bernoulli model"). The Bernoulli model is constructed with beam elements consisting of two nodes, each with six degrees of freedom (DoF). The mass of the membrane is distributed over the beam elements. Secondly, the debris is modelled using multibody dynamics theory (the "Multibody model") as a series of lumped masses, connected through flexible joints, representing the flexibility of the membrane itself. The mass of the membrane, albeit low, is taken into account with lumped masses at the joints. The dynamic equations for the masses, including the constraints defined by the connecting rigid rods, are derived using fundamental Newtonian mechanics. The physical properties required by both flexible models (membrane density, reflectivity, composition, etc.) are assumed to be those of multilayer insulation.
Both flexible membrane models are then propagated together with classical orbital and attitude equations of motion in the near-GEO region to predict the orbital evolution under the perturbations of solar radiation pressure, Earth's gravity field, luni-solar gravitational fields and the self-shadowing effect. These results are then compared to two rigid-body models (cannonball and flat rigid plate). When compared with a rigid model, the evolutions of the orbital elements of the flexible models show differences in the inclination and secular eccentricity evolutions, rapid irregular attitude motion and an unstable cross-section area due to deformation over time. Monte Carlo simulations varying the initial attitude dynamics and deformation angle are then investigated and compared with the rigid models over 100 days. The simulations show that different initial conditions produce unique orbital motions, significantly different from the orbital motions of both rigid models. Furthermore, this thesis presents a methodology to determine the dynamic material properties of thin membranes and validates the deformation of the multibody model with real MLI materials. Experiments are performed in a high-vacuum chamber (10^-4 mbar) replicating the space environment. A thin membrane is hinged at one end but free at the other. The first experiment, the free-motion experiment, is a free-vibration test to determine the damping coefficient and natural frequency of the thin membrane. In this test, the membrane is allowed to fall freely in the chamber with the motion tracked and captured through high-speed video frames. A Kalman filter technique is implemented in the tracking algorithm to reduce noise and increase the tracking accuracy of the oscillating motion. The second experiment, the forced-motion experiment, is performed to determine the deformation characteristics of the object.
A high-power spotlight (500-2000 W) is used to illuminate the MLI, and the displacements are measured by means of a high-resolution laser sensor. Finite Element Analysis (FEA) and multibody dynamics models of the experimental setups are used for the validation of the flexible model by comparison with the experimental displacements and natural frequencies.
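The Kalman-filter tracking step described in this abstract can be illustrated with a minimal constant-velocity filter smoothing a noisy 1-D position track. This is a generic sketch, not the thesis' tracker: the state model, frame rate and noise covariances below are assumed values for illustration only.

```python
import numpy as np

def kalman_track(measurements, dt=1/240.0, q=1e-2, r=2.5e-3):
    """Constant-velocity Kalman filter smoothing a noisy 1-D track.

    State x = [position, velocity]; dt, q, r are illustrative tuning values.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])                   # state transition
    H = np.array([[1.0, 0.0]])                              # position is measured
    Q = q * np.array([[dt**3/3, dt**2/2], [dt**2/2, dt]])   # process noise
    R = np.array([[r]])                                     # measurement noise
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new video-frame measurement
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)
```

On a noisy measurement stream the filtered track has a visibly lower error than the raw measurements, which is the point of inserting the filter into the tracking loop.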
Abstract:
Three sediment records of sea surface temperature (SST) are analyzed that originate from distant locations in the North Atlantic, have centennial-to-multicentennial resolution, are based on the same reconstruction method and chronological assumptions, and span the past 15 000 yr. Using recursive least squares techniques, an estimate of the time-dependent North Atlantic SST field over the last 15 kyr is sought that is consistent with both the SST records and a surface ocean circulation model, given estimates of their respective error (co)variances. Under the authors' assumptions about data and model errors, it is found that the 10 degrees C mixed layer isotherm, which approximately traces the modern Subpolar Front, would have moved by ~15 degrees of latitude southward (northward) in the eastern North Atlantic at the onset (termination) of the Younger Dryas cold interval (YD), a result significant at the level of two standard deviations in the isotherm position. In contrast, meridional movements of the isotherm in the Newfoundland basin are estimated to be small and not significant. Thus, the isotherm would have pivoted twice around a region southeast of the Grand Banks, with a southwest-northeast orientation during the warm intervals of the Bølling-Allerød and the Holocene and a more zonal orientation and southerly position during the cold interval of the YD. This study provides an assessment of the significance of similar previous inferences and illustrates the potential of recursive least squares in paleoceanography.
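The recursive least squares machinery this abstract relies on can be sketched with the textbook scalar-measurement update. This is the generic estimator family, not the authors' ocean-state implementation; the prior covariance and noise variance are illustrative assumptions.

```python
import numpy as np

class RecursiveLeastSquares:
    """Textbook RLS: x_k = x_{k-1} + K (y - h . x_{k-1}).

    A generic sketch of the recursive estimator family used in the study.
    """
    def __init__(self, n, p0=1e3):
        self.x = np.zeros(n)        # parameter estimate
        self.P = p0 * np.eye(n)     # estimate covariance (diffuse prior)

    def update(self, h, y, r=1.0):
        h = np.asarray(h, float)
        S = h @ self.P @ h + r                   # innovation variance
        K = self.P @ h / S                       # gain
        self.x = self.x + K * (y - h @ self.x)   # estimate update
        self.P = self.P - np.outer(K, h) @ self.P
        return self.x
```

Fed a stream of (regressor, observation) pairs, the estimate converges to the least-squares solution without ever forming the full normal equations, which is what makes the approach attractive for long time-dependent field reconstructions.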
Abstract:
In this thesis, we propose several advances in the numerical and computational algorithms that are used to determine tomographic estimates of physical parameters in the solar corona. We focus on methods for both global dynamic estimation of the coronal electron density and estimation of local transient phenomena, such as coronal mass ejections, from empirical observations acquired by instruments onboard the STEREO spacecraft. We present a first look at tomographic reconstructions of the solar corona from multiple points-of-view, which motivates the developments in this thesis. In particular, we propose a method for linear equality constrained state estimation that leads toward more physical global dynamic solar tomography estimates. We also present a formulation of the local static estimation problem, i.e., the tomographic estimation of local events and structures like coronal mass ejections, that couples the tomographic imaging problem to a phase field based level set method. This formulation will render feasible the 3D tomography of coronal mass ejections from limited observations. Finally, we develop a scalable algorithm for ray tracing dense meshes, which allows efficient computation of many of the tomographic projection matrices needed for the applications in this thesis.
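As a toy illustration of the tomographic projection matrices mentioned above, the sketch below approximates one matrix row (the per-cell path lengths of a single ray through a 2-D grid) by dense point sampling. The thesis' scalable ray tracer for dense meshes is far more efficient; this is only a stand-in showing what a projection-matrix row represents, with grid size and sampling density as assumed parameters.

```python
import numpy as np

def projection_row(p0, p1, n, extent=1.0, samples=4000):
    """Approximate one row of a tomographic projection matrix on an n x n grid.

    Each entry is the path length of the ray p0 -> p1 inside a cell,
    estimated by dense point sampling along the ray.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    length = np.linalg.norm(p1 - p0)
    ts = (np.arange(samples) + 0.5) / samples        # sample midpoints
    pts = p0 + ts[:, None] * (p1 - p0)
    cell = extent / n
    idx = np.floor(pts / cell).astype(int)           # (i, j) cell indices
    row = np.zeros((n, n))
    inside = np.all((idx >= 0) & (idx < n), axis=1)
    for i, j in idx[inside]:
        row[j, i] += length / samples                # length contribution
    return row
```

Stacking one such row per line of sight yields the sparse system matrix that the tomographic estimation problems in the thesis invert.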
Abstract:
The Theoretical and Experimental Tomography in the Sea Experiment (THETIS 1) took place in the Gulf of Lion to observe the evolution of the temperature field and the process of deep convection during the 1991-1992 winter. The temperature measurements consist of moored sensors, conductivity-temperature-depth and expendable bathythermograph surveys, and acoustic tomography. Because of this diverse data set, and since the field evolves rather fast, the analysis uses a unified framework, based on estimation theory and implementing a Kalman filter. The resolution and the errors associated with the model are systematically estimated. Temperature is a good tracer of water masses. The time-evolving three-dimensional view of the field resulting from the analysis shows the details of the three classical convection phases: preconditioning, vigorous convection, and relaxation. In all phases, there is strong spatial nonuniformity, with mesoscale activity, short timescales, and sporadic evidence of advective events (surface capping, intrusions of Levantine Intermediate Water (LIW)). Deep convection, reaching 1500 m, was observed in late February; by late April the field had not yet returned to its initial conditions (strong deficit of LIW). Comparison with available atmospheric flux data shows that advection acts to delay the occurrence of convection and confirms the essential role of buoyancy fluxes. For this winter, the deep mixing results in an injection of anomalously warm water (ΔT ≈ 0.03 °C) to a depth of 1500 m, compatible with the deep warming previously reported.
Abstract:
The goal of this study is to provide a framework for future researchers to understand and use the FARSITE wildfire-forecasting model with data assimilation. Current wildfire models lack the ability to provide accurate predictions of fire front position faster than real time. When FARSITE is coupled with a recursive ensemble filter, the data assimilation forecast method improves. The scope includes an explanation of the standalone FARSITE application, technical details on FARSITE integration with a parallel program coupler called OpenPALM, and a model demonstration of the FARSITE-Ensemble Kalman Filter software using the FireFlux I experiment by Craig Clements. The results show that the fire front forecast is improved with the proposed data-driven methodology compared with the standalone FARSITE model.
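The recursive ensemble filter named above is, in its standard form, the stochastic Ensemble Kalman Filter analysis step. The sketch below shows that step for a generic state vector; it is not the FARSITE coupling itself, and the observation operator and error variance are illustrative assumptions.

```python
import numpy as np

def enkf_update(ensemble, H, y, r, rng):
    """Stochastic EnKF analysis step (perturbed observations).

    ensemble: (n_members, n_state) forecast ensemble; H: linear observation
    matrix; y: observation vector; r: observation error variance.
    """
    X = np.asarray(ensemble, float)
    n, _ = X.shape
    A = X - X.mean(axis=0)                         # state anomalies
    HX = X @ H.T
    HA = HX - HX.mean(axis=0)                      # observation-space anomalies
    Pyy = HA.T @ HA / (n - 1) + r * np.eye(H.shape[0])
    Pxy = A.T @ HA / (n - 1)
    K = Pxy @ np.linalg.inv(Pyy)                   # ensemble Kalman gain
    yp = y + np.sqrt(r) * rng.standard_normal((n, H.shape[0]))
    return X + (yp - HX) @ K.T                     # analysis ensemble
```

Each assimilation cycle pulls the ensemble mean toward the observation and shrinks the ensemble spread, which is how the coupled system corrects the simulated fire front toward the measured one.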
Abstract:
The corrosion-inhibition efficiency on AISI 1018 steel of a saponified coconut oil surfactant (SCO) and a mesoionic heterocycle (1,3,4-triazolium-2-thiolate) was studied in microemulsion systems (SCO-ME and SCO-ME-MI) of the O/W type (water-rich emulsion), within the Winsor IV region. The microemulsion systems (SCO-ME and SCO-ME-MI) were evaluated as corrosion inhibitors in a saline solution of 10,000 ppm chloride enriched with carbon dioxide (CO2). The corrosion inhibitors were evaluated by the techniques of linear polarization resistance (LPR) and weight loss (MW) in an instrumented cell fitted with gravimetric and electrochemical devices. The systems achieved surface film formation in less than 60 minutes, with inhibition efficiencies of 91.25% (SCO-ME) and 98.54% (SCO-ME-MI).
Abstract:
The theme of this work is inquiry-based science education (IBSE) [Ensino de Ciências através da Investigação (ENCI)] using Information and Communication Technologies (ICT) for science teaching (TICEC). This research aims to identify conceptions of the Nature of Science and Technology, and to understand the interactions, relationships and discussions of children and young people as they carry out inquiry activities through ICT, within an active and collaborative science-teaching context. To reach this goal, we organised the investigation in three parts comprising different studies. The first part presents studies, based on a systematic review, of how science is taught and learned using ICT (TICEC). We also reviewed the role of argumentation in science teaching and developed an instrument to analyse participants' argumentation in an ICT-based inquiry activity. The second part characterises the methodology and the elements of the investigation: methodological aspects (qualitative research and case study), setting (a non-formal, active and collaborative science-teaching space), subjects (children and young people from an economically vulnerable background), data-collection instruments (questionnaire, filmed interviews, focus group and observations) and data analysis (category system). We also present the design of a science-teaching inquiry activity (AIEC) mediated by digital resources, called the Virtual Thematic Module (MTV). The third part presents the results. First, we identify the participants' conceptions of the Nature of Science (NOS), the Nature of Technology (NOT) and the role of scientists in society.
The final study characterises the interactions, relationships and discussions developed by the participants when carrying out AIEC through TICEC. We found that, within an active, ICT-mediated science-teaching context, participants tend to manifest four aspects or domains: social, technical, affective and cognitive. These four domains characterise the processes that emerge during science activities in a specific setting complementary to traditional education, and they indicate how to plan and analyse science teaching when digital resources are used.
Abstract:
Doctorate in Economics.
Abstract:
An approximation to the term structure of interest rates in the Colombian market is presented through two models: the Diebold & Li model with latent factors and the Diebold, Rudebusch & Aruoba model with macro-factors, both estimated with a Kalman filter implemented in MATLAB and subsequently used to forecast the curve as a function of the expected behaviour of macroeconomic and financial variables of the local and US economies. The macro-factors are included in the expectation of better curve projections, so that projections of these variables will be useful for anticipating the future behaviour of the local yield curve. The models are fitted with monthly data for the period 2003-2015 and tested on a portion of this information; the latent-factor model uses only the historical zero-coupon curve, while the macro-factor model also considers variables such as 12-month local inflation, the 5Y CDS, the VIX index, WTI prices, the TRM, the euro/dollar exchange rate, the repo rate and the Fed rate. Two models are finally obtained, and the one with macro-factors shows the better forecasting performance.
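The measurement side of the Diebold & Li model loads the yield curve on three latent factors (level, slope, curvature) through fixed Nelson-Siegel loadings; in the state-space version those factors are what the Kalman filter tracks. The sketch below builds the loadings and recovers the factors for one date by cross-sectional least squares, a common simplification of the full filter; the decay parameter value follows Diebold & Li (2006), and the maturities are illustrative.

```python
import numpy as np

def nelson_siegel_loadings(taus, lam=0.0609):
    """Nelson-Siegel factor loadings used by the Diebold-Li model.

    taus: maturities in months. Columns correspond to the level,
    slope and curvature factors.
    """
    taus = np.asarray(taus, float)
    x = lam * taus
    slope = (1 - np.exp(-x)) / x
    curv = slope - np.exp(-x)
    return np.column_stack([np.ones_like(taus), slope, curv])

def fit_factors(taus, yields, lam=0.0609):
    """Cross-sectional OLS estimate of the three latent factors for one date."""
    L = nelson_siegel_loadings(taus, lam)
    beta, *_ = np.linalg.lstsq(L, np.asarray(yields, float), rcond=None)
    return beta
```

In the full state-space model these per-date factors follow a VAR(1) transition, and the Kalman filter replaces the date-by-date OLS step with a joint filtered estimate.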
Abstract:
Estimating unmeasurable states is an important component of onboard diagnostics (OBD) and control strategy development in diesel exhaust aftertreatment systems. This research focuses on the development of an Extended Kalman Filter (EKF) based state estimator for two of the main components in a diesel engine aftertreatment system: the Diesel Oxidation Catalyst (DOC) and the Selective Catalytic Reduction (SCR) catalyst. One of the key areas of interest is the performance of these estimators when the catalyzed particulate filter (CPF) is being actively regenerated. In this study, model reduction techniques were developed and used to derive reduced-order models from the 1D models used to simulate the DOC and SCR. As a result of order reduction, the number of states in the estimator is reduced from 12 to 1 per element for the DOC and from 12 to 2 per element for the SCR. The reduced-order models were simulated on the experimental data and compared to the high-fidelity model and the experimental data. The results show that the effect of eliminating the heat transfer and mass transfer coefficients is not significant on the performance of the reduced-order models. This is shown by an insignificant change in the kinetic parameters between the reduced-order and 1D models when simulating the experimental data. An EKF-based estimator for the internal states of the DOC and SCR was developed. The DOC and SCR estimators were simulated on the experimental data to show that the estimator provides improved estimation of states compared to a reduced-order model. The results showed that using the temperature measurement at the DOC outlet improved the estimates of the CO, NO, NO2 and HC concentrations from the DOC. The SCR estimator was used to evaluate the effect of NH3 and NOx sensors on state estimation quality. Three sensor combinations were evaluated: NOx sensor only, NH3 sensor only, and both NOx and NH3 sensors. The NOx-only configuration had the worst performance, the NH3-only configuration was intermediate, and the combination of NOx and NH3 sensors provided the best performance.
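The EKF predict/update cycle at the core of such an estimator can be written generically for any nonlinear dynamics and measurement model. The sketch below is the standard algorithm, not the DOC/SCR catalyst model; the functions and Jacobians are supplied by the caller.

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One predict/update cycle of an Extended Kalman Filter.

    f, h: nonlinear state-transition and measurement functions;
    F_jac, H_jac: their Jacobians evaluated at the current estimate.
    """
    # predict through the nonlinear dynamics, linearising for the covariance
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # update with the nonlinear measurement model
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

For the catalyst application, f would be the reduced-order element model (1 state per DOC element, 2 per SCR element) and z the gas sensor and thermocouple readings.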
Abstract:
In this report, we develop an intelligent adaptive neuro-fuzzy controller using adaptive neuro-fuzzy inference system (ANFIS) techniques. We begin with a standard proportional-derivative (PD) controller and use the PD controller data to train the ANFIS system to develop a fuzzy controller. We then propose and validate a method to implement this control strategy on commercial off-the-shelf (COTS) hardware. An analysis is made of the choice of filters for attitude estimation. These choices are limited by the complexity of the filter and by the computing ability and memory constraints of the microcontroller. Simplified Kalman filters are found to be good at attitude estimation given the above constraints. Using model-based design techniques, the models are implemented on an embedded system. This enables the deployment of fuzzy controllers on enthusiast-grade controllers. We evaluate the feasibility of the proposed control strategy in a model-in-the-loop simulation. We then propose a rapid prototyping strategy, allowing us to deploy these control algorithms on a system consisting of a combination of an ARM-based microcontroller and two Arduino-based controllers. We use the code generation capabilities within MATLAB/Simulink in combination with multiple open-source projects to deploy code to an ARM Cortex-M4 based controller board. We also evaluate this strategy on an ARM Cortex-A8 based board and on a much less powerful Arduino-based flight controller. We conclude by demonstrating the feasibility of fuzzy controllers on COTS hardware, pointing out the limitations of the current hardware, and making suggestions for hardware that we think would be better suited for memory-heavy controllers.
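The PD control law that generates the ANFIS training data is simple enough to state in a few lines. This is the generic law, not the report's tuned controller; the gains below are illustrative, and in the report the (error, error-rate, command) triples produced by such a loop are what the ANFIS is trained on.

```python
def pd_control(error, prev_error, dt, kp=4.0, kd=0.8):
    """Proportional-derivative control law.

    Approximates the error derivative by finite difference;
    kp and kd are illustrative gains.
    """
    d_error = (error - prev_error) / dt
    return kp * error + kd * d_error
```

Driving a double-integrator plant (a crude stand-in for a rotational attitude axis) with this law produces a damped response that settles on the setpoint, which is the behaviour the trained fuzzy controller is meant to reproduce.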
Abstract:
Physiological signals, which are controlled by the autonomic nervous system (ANS), could be used to detect the affective state of computer users and therefore find applications in medicine and engineering. The Pupil Diameter (PD) seems to provide a strong indication of the affective state, as found by previous research, but it has not yet been fully investigated. In this study, new approaches based on monitoring and processing the PD signal for off-line and on-line affective assessment ("relaxation" vs. "stress") are proposed. Wavelet denoising and Kalman filtering methods are first used to remove abrupt changes in the raw Pupil Diameter (PD) signal. Then three features (PDmean, PDmax and PDWalsh) are extracted from the preprocessed PD signal for the affective state classification. In order to select more relevant and reliable physiological data for further analysis, two types of data selection methods are applied, based on the paired t-test and on subject self-evaluation, respectively. In addition, five different kinds of classifiers are implemented on the selected data, achieving average accuracies of up to 86.43% and 87.20%, respectively. Finally, the receiver operating characteristic (ROC) curve is utilized to investigate the discriminating potential of each individual feature by evaluating the area under the ROC curve, which reaches values above 0.90. For the on-line affective assessment, a hard threshold is implemented first in order to remove the eye blinks from the PD signal, and then a moving average window is utilized to obtain the representative value PDr for every one-second time interval of PD. There are three main steps in the on-line affective assessment algorithm: preparation, feature-based decision voting and affective determination.
The final results show that the accuracies are 72.30% and 73.55% for the data subsets, which were respectively chosen using two types of data selection methods (paired t-test and subject self-evaluation). In order to further analyze the efficiency of affective recognition through the PD signal, the Galvanic Skin Response (GSR) was also monitored and processed. The highest affective assessment classification rate obtained from GSR processing is only 63.57% (based on the off-line processing algorithm). The overall results confirm that the PD signal should be considered as one of the most powerful physiological signals to involve in future automated real-time affective recognition systems, especially for detecting the “relaxation” vs. “stress” states.
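The on-line preprocessing described above (hard threshold to reject blinks, then a one-second moving-average window yielding PDr) can be sketched as below. The blink threshold, sampling rate and replace-with-last-valid policy are illustrative assumptions, not the thesis' calibrated values.

```python
import numpy as np

def preprocess_pd(pd_signal, fs, blink_floor=2.0, window_s=1.0):
    """Hard-threshold blink removal followed by per-window averaging.

    Samples below blink_floor (mm) are treated as blink dropouts and
    replaced by the last valid sample; one PDr value is returned per
    window_s-second interval.
    """
    x = np.array(pd_signal, dtype=float)
    for i in range(len(x)):
        if x[i] < blink_floor:                 # blink: pupil reading collapses
            x[i] = x[i - 1] if i > 0 else blink_floor
    w = int(fs * window_s)
    n = len(x) // w
    return x[: n * w].reshape(n, w).mean(axis=1)   # one PDr per interval
```

The resulting PDr sequence is what the feature-based decision voting of the on-line algorithm operates on.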
Abstract:
This study focuses on the influence of variability in the practice of Rhythmic Gymnastics and on motor development from an ecological perspective. The primary goal of this investigation was to analyse the teaching, learning and development of children practising this sport, comparing traditional teaching methods, in which the children reproduce the technical gestures imposed by the coach, with teaching that uses variability as supported by dynamical systems theory. Fifty children aged 7 to 10 from the training class participated in this investigation; the sample was intentional. We used both quantitative and qualitative methodologies. The quantitative methodology was applied through the KTK motor coordination test, used to verify the coordination level of the groups studied. According to the analyses, in the 7-8 age group the post-test showed 69% for the traditional method and 50% for variability; in the 9-10 age group, the post-test showed 34% for the traditional method and 77% for variability in overall motor development gains. The Microsoft R program (Tukey's test: 37.54 for traditional and 45.39 for variability) and Excel 2007 were used for the statistical analysis of the data. The qualitative methodology was used to collect and analyse the data obtained by observing recordings of the classes, in order to verify the effectiveness of the methods applied. Through the qualitative evaluation data, we understand from this study that the athlete is an agent in continuous change. Through the evaluative reports, and based on the theories investigated, we found that the traditional and ecological teaching models diverge in their complexity, which deepens through athlete-environment interaction, exploring different affordances. We therefore need new, differentiated teaching strategies grounded in an ecological framework to meet the needs of an eco-dynamic sport such as Rhythmic Gymnastics.
Abstract:
Driving a motor vehicle has become increasingly automated over the years. This development aims to make driving ever easier, to prevent road accidents, and to serve in military operations. The main focus of this work is the study of the application of two of the main probabilistic localisation methods to the localisation of an autonomous motor vehicle: the Extended Kalman Filter and the Particle Filter. Several versions of both filters were implemented, both in simulation and on a real vehicle, based, in terms of sensors, on the robot's odometry, an inertial unit and a GPS. This work continues several previous projects focused on the development of a low-cost autonomous vehicle, adding to it the capability of self-localisation. The work developed will thus serve as a basis for autonomous navigation in future work.
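The second of the two methods named above, the particle filter, can be sketched as a single predict/update/resample cycle in one dimension. This is a toy version, not the vehicle implementation: the motion and measurement noise levels are assumed values, the control is a commanded displacement, and the measurement is a GPS-like noisy position reading.

```python
import numpy as np

def particle_filter_step(particles, weights, control, measurement, rng,
                         motion_noise=0.1, meas_noise=0.5):
    """One predict/update/resample cycle of a 1-D particle filter.

    particles: candidate positions; control: commanded displacement;
    measurement: noisy position reading (GPS-like).
    """
    # predict: propagate each particle through the noisy motion model
    particles = particles + control + motion_noise * rng.standard_normal(particles.size)
    # update: reweight by the Gaussian likelihood of the measurement
    w = weights * np.exp(-0.5 * ((measurement - particles) / meas_noise) ** 2)
    w /= w.sum()
    # resample to avoid weight degeneracy, then reset to uniform weights
    idx = rng.choice(particles.size, size=particles.size, p=w)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)
```

Repeating the cycle as the vehicle moves concentrates the particle cloud around the true pose; the weighted mean of the particles is the localisation estimate.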