980 results for Block motion estimation
Abstract:
Purpose: To investigate the effect of incremental increases in intraocular straylight on threshold measurements made by three modern forms of perimetry: Standard Automated Perimetry (SAP) using Octopus (Dynamic, G-Pattern), Pulsar Perimetry (PP) (TOP, 66 points) and the Moorfields Motion Displacement Test (MDT) (WEBS, 32 points). Methods: Four healthy young observers were recruited (mean age 26 years [range 25-28 years]; refractive correction [+2 D, -4.25 D]). Five white opacity filters (WOF), each scattering light by a different amount, were used to create incremental increases in intraocular straylight (IS). Resultant IS values were measured with each WOF and at baseline (no WOF) for each subject using a C-Quant Straylight Meter (Oculus, Wetzlar, Germany). A 25-year-old has an IS value of ~0.85 log(s); an increase of 40% in IS, to 1.2 log(s), corresponds to the physiological value of a 70-year-old. Each WOF created an increase in IS of between 10% and 150% from baseline, ranging from effects similar to normal aging to those found with considerable cataract. Each subject underwent six test sessions over a 2-week period; each session consisted of the three perimetric tests using one of the five WOFs or no filter at baseline (both instrument and filter order were randomised). Results: The reduction in sensitivity from baseline was calculated. A two-way ANOVA on mean change in threshold (with subjects treated as rows of the block and each increment in fog filter treated as a column) was used to examine the effect of incremental increases in straylight. Both SAP (p<0.001) and Pulsar (p<0.001) were significantly affected by increases in straylight. The MDT (p=0.35) remained comparatively robust to increases in straylight. Conclusions: The Moorfields MDT measurement of threshold is robust to the effects of additional straylight compared with SAP and PP.
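For illustration, a minimal sketch of such a randomized-block analysis in Python, using statsmodels and entirely fabricated threshold-change values (the study's data are not reproduced here), might look like this:

# Randomized-block two-way ANOVA: subjects are the blocking factor (rows),
# straylight filters the treatment (columns). Values below are fabricated.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "subject": ["S1", "S2", "S3", "S4"] * 5,
    "wof": [f for f in ["WOF1", "WOF2", "WOF3", "WOF4", "WOF5"] for _ in range(4)],
    "dthresh": [0.5, 0.7, 0.4, 0.6,  1.1, 0.9, 1.2, 1.0,
                1.8, 2.0, 1.7, 1.9,  2.6, 2.4, 2.8, 2.5,
                3.3, 3.5, 3.1, 3.4],
})

# Additive two-way model without interaction (one observation per cell)
model = ols("dthresh ~ C(subject) + C(wof)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))  # the C(wof) p-value tests the straylight effect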
Abstract:
Wireless “MIMO” systems, employing multiple transmit and receive antennas, promise a significant increase in channel capacity, while orthogonal frequency-division multiplexing (OFDM) is attracting a good deal of attention due to its robustness to multipath fading. The combination of both techniques is therefore an attractive proposition for radio transmission. The goal of this paper is the description and analysis of a novel pilot-aided estimator of multipath block-fading channels. Typical models leading to estimation algorithms assume the number of multipath components and their delays to be constant (and often known), while their amplitudes are allowed to vary with time. Our estimator is instead based on the more realistic assumption that the number of channel taps is also unknown and varies with time following a known probabilistic model. The estimation problem arising from these assumptions is solved using Random-Set Theory (RST), whereby one regards the multipath-channel response as a single set-valued random entity. Within this framework, Bayesian recursive equations determine the evolution of the channel estimator with time. Since the Bayesian equations admit no closed-form solution, a Rao-Blackwellized particle filter (RBPF) implementation of the channel estimator is advocated. Since the resulting estimator exhibits a complexity that grows exponentially with the number of multipath components, a simplified version is also introduced. Simulation results describing the performance of our channel estimator demonstrate its effectiveness.
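The paper's RST-based Rao-Blackwellized estimator cannot be reconstructed from the abstract alone, but a generic bootstrap particle filter tracking a single fading tap, with assumed AR(1) dynamics and known pilots, sketches the sequential Monte Carlo machinery such estimators build on:

# Bootstrap particle filter on one Rayleigh-fading tap (illustrative only; the
# paper's estimator is far richer: set-valued state, unknown tap count, RBPF).
import numpy as np

rng = np.random.default_rng(0)
T, N, rho, sq, sn = 100, 500, 0.99, 0.05, 0.1   # steps, particles, AR coeff, noise stds

h_true = np.zeros(T, complex)
for t in range(1, T):
    h_true[t] = rho * h_true[t-1] + sq * (rng.standard_normal() + 1j * rng.standard_normal())
pilots = np.ones(T)                              # known pilot symbols
y = h_true * pilots + sn * (rng.standard_normal(T) + 1j * rng.standard_normal(T))

p = np.zeros(N, complex)                         # particles for the tap amplitude
est = np.zeros(T, complex)
for t in range(T):
    p = rho * p + sq * (rng.standard_normal(N) + 1j * rng.standard_normal(N))  # propagate
    logw = -np.abs(y[t] - p * pilots[t])**2 / sn**2                            # likelihood
    w = np.exp(logw - logw.max()); w /= w.sum()
    est[t] = np.sum(w * p)                       # posterior-mean channel estimate
    p = p[rng.choice(N, N, p=w)]                 # multinomial resampling
print("final MSE:", np.mean(np.abs(est - h_true)**2))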
Abstract:
This paper derives approximations allowing the estimation of outage probability for standard irregular LDPC codes and full-diversity Root-LDPC codes used over nonergodic block-fading channels. Two separate approaches are discussed: a numerical approximation, obtained by curve fitting, for both code ensembles, and an analytical approximation for Root-LDPC codes, obtained under the assumption that the slope of the iterative threshold curve of a given code ensemble matches the slope of the outage capacity curve in the high-SNR regime.
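As a hedged illustration of the curve-fitting route, one might fit a high-SNR model of the form log10(Pout) = a - d*SNR_dB/10 to simulated outage points, with d playing the role of the slope (diversity order) of the outage curve; the data below are fabricated:

# Curve fitting a high-SNR outage model to simulated points (fabricated values)
import numpy as np
from scipy.optimize import curve_fit

snr_db = np.array([10, 15, 20, 25, 30], dtype=float)
p_out = np.array([2e-1, 4e-2, 8e-3, 1.6e-3, 3.2e-4])   # hypothetical outage estimates

def high_snr_model(snr, a, d):
    # linear in dB scale: slope d captures the decay rate of the outage curve
    return a - d * snr / 10.0

(a_hat, d_hat), _ = curve_fit(high_snr_model, snr_db, np.log10(p_out))
print(f"fitted offset a = {a_hat:.2f}, slope d = {d_hat:.2f}")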
Abstract:
Tripping is considered a major cause of falls in older people. Therefore, foot clearance (i.e., the height of the foot above the ground during the swing phase) could be a key factor in better understanding the complex relationship between gait and falls. This paper presents a new method to estimate clearance using a foot-worn, wireless inertial sensor system. The method relies on the computation of foot orientation and trajectory from the fusion of the sensor signals, combined with the temporal detection of toe-off and heel-strike events. Based on a kinematic model that automatically estimates the sensor position relative to the foot, heel and toe trajectories are estimated. 2-D and 3-D models are presented with different solving approaches and validated against an optical motion capture system on 12 healthy adults performing short walking trials at self-selected, slow, and fast speeds. Parameters corresponding to local minima and maxima of heel and toe clearance were extracted and showed accuracy ± precision of 4.1 ± 2.3 cm for maximal heel clearance and 1.3 ± 0.9 cm for minimal toe clearance compared to the reference. The system is lightweight, wireless, and easy to wear and use, and it provides a new and useful tool for routine clinical assessment of gait outside a dedicated laboratory.
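A much-simplified sketch of the trajectory step, assuming orientation has already been resolved and gravity removed, and using a zero-velocity reset at each detected foot-flat instant (a common strapdown simplification, not necessarily the paper's exact pipeline):

# Toy foot-height estimation by double integration with drift resets (assumptions above)
import numpy as np

def foot_height(acc_vert, fs, footflat_idx):
    """Integrate gravity-free vertical acceleration (m/s^2) sampled at fs Hz
    into height, resetting integration drift at detected foot-flat samples."""
    dt = 1.0 / fs
    vel = np.zeros_like(acc_vert)
    hgt = np.zeros_like(acc_vert)
    for k in range(1, len(acc_vert)):
        vel[k] = 0.0 if k in footflat_idx else vel[k - 1] + acc_vert[k] * dt
        hgt[k] = 0.0 if k in footflat_idx else hgt[k - 1] + vel[k] * dt
    return hgt

# Usage with synthetic data: one swing phase between two foot-flats
fs = 100
t = np.arange(0, 1.0, 1.0 / fs)
acc = 4.0 * np.cos(2 * np.pi * t)          # toy vertical acceleration profile
h = foot_height(acc, fs, footflat_idx={0, len(t) - 1})
print(f"max clearance ~ {h.max():.3f} m")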
Abstract:
Sophisticated magnetic resonance tagging techniques provide powerful tools for the non-invasive assessment of local heart-wall motion, working towards a deeper fundamental understanding of local heart function. A new image analysis procedure has been developed for extracting motion data from time series of magnetic resonance tagged images and for visualizing local heart-wall motion. New parameters have been derived which allow quantification of the motion patterns and are highly sensitive to any changes in these patterns. The new procedure has been applied to heart motion analysis in healthy volunteers and in patient collectives with different heart diseases. The results achieved are summarized and discussed.
Abstract:
A comprehensive field detection method is proposed, aimed at developing advanced capability for reliable monitoring, inspection and life estimation of bridge infrastructure. The goal is to utilize motion-sensing radio transponders (RFIDs) for fully adaptive bridge monitoring, minimizing the problems inherent in human inspection of bridges. We developed a novel integrated condition-based maintenance (CBM) framework that combines transformative research in RFID sensors and sensing architecture for in-situ scour monitoring with state-of-the-art, computationally efficient multiscale modeling for scour assessment.
Abstract:
The phyllochron is defined as the time required for the appearance of successive leaves on a plant; it characterises plant growth, development and adaptation to the environment. To assess the growth and adaptation of strawberry cultivars grown intercropped with fig trees, the phyllochron was estimated in these production systems and in the monocrop. The experiment was conducted in greenhouses at the University of Passo Fundo (28º15'41'' S, 52º24'45'' W, 709 m) from June 8th to September 4th, 2009, covering the period from transplanting until the second flowering. The cultivars Aromas, Camino Real, Albion, Camarosa and Ventana, whose seedlings originated from the Agrícola LLahuen Nursery in Chile, as well as Festival, Camino Real and Earlibrite, originated from the Viansa S.A. Nursery in Argentina, were grown in white polyethylene bags filled with commercial substrate (Tecnomax®) and evaluated. The treatments were arranged in a randomised block design with four replicates. A linear regression was fitted between the leaf number (LN) on the main crown and the accumulated thermal time (ATT), and the phyllochron (degree-days leaf-1) was estimated as the inverse of the slope of this regression. The data were submitted to ANOVA and, when significance was observed, the means were compared using the Tukey test (p < 0.05). The mean and standard deviation of the phyllochron of strawberry cultivars intercropped with fig trees varied from 149.35 ºC day leaf-1 ± 31.29 in the Albion cultivar to 86.34 ºC day leaf-1 ± 34.74 in the Ventana cultivar. Significant differences were observed among cultivars produced in a soilless environment, with higher values recorded for Albion (199.96 ºC day leaf-1 ± 29.7), which required more degree-days to produce a leaf, while cv. Ventana (85.76 ºC day leaf-1 ± 11.51) exhibited the lowest mean phyllochron. Based on these results, Albion requires more degree-days to issue a leaf than cv. Ventana. It was concluded that strawberry cultivars can be grown intercropped with fig trees (cv. Roxo de Valinhos).
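The phyllochron computation itself is a simple regression; a sketch with fabricated leaf counts (not the trial data) follows:

# Phyllochron as the inverse of the slope of leaf number vs accumulated thermal time
import numpy as np
from scipy.stats import linregress

att = np.array([0, 100, 200, 300, 400, 500], dtype=float)  # accumulated degree-days
ln = np.array([1.0, 2.1, 3.0, 4.2, 5.1, 6.0])              # leaf number on main crown

fit = linregress(att, ln)
phyllochron = 1.0 / fit.slope    # degree-days per leaf
print(f"phyllochron = {phyllochron:.1f} degree-days leaf^-1 (r^2 = {fit.rvalue**2:.3f})")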
Abstract:
Sensor-based robot control allows manipulation in dynamic environments with uncertainties. Vision is a versatile, low-cost sensory modality, but its low sample rate, high sensor delay and uncertain measurements limit its usability, especially in strongly dynamic environments. Force is a complementary sensory modality allowing accurate measurements of local object shape when a tooltip is in contact with the object. In multimodal sensor fusion, several sensors measuring different modalities are combined to give a more accurate estimate of the environment. As force and vision are fundamentally different sensory modalities that do not share a common representation, combining the information from these sensors is not straightforward. In this thesis, methods for fusing proprioception, force and vision are proposed. By making assumptions about object shape and modeling the uncertainties of the sensors, the measurements can be fused in an extended Kalman filter. The fusion of force and visual measurements makes it possible to estimate the pose of a moving target with an end-effector-mounted moving camera at high rate and accuracy. The proposed approach takes the latency of the vision system into account explicitly in order to provide high-sample-rate estimates. The estimates also allow a smooth transition from vision-based motion control to force control. The velocity of the end-effector can be controlled by estimating the distance to the target by vision and determining the velocity profile that gives a rapid approach and minimal force overshoot. Experiments with a 5-degree-of-freedom parallel hydraulic manipulator and a 6-degree-of-freedom serial manipulator show that the integration of several sensor modalities can significantly increase the accuracy of the measurements.
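A minimal sketch of the fusion principle, reduced to a linear 1-D toy problem in which a fast contact-type measurement and a slower, noisier vision-type measurement observe the same target coordinate (not the thesis' full 6-DOF formulation):

# Kalman-filter fusion of a fast, accurate sensor and a slow, noisy one (toy model)
import numpy as np

x, P = np.array([0.0, 0.0]), np.eye(2)        # state: [position, velocity]
F = np.array([[1.0, 0.01], [0.0, 1.0]])       # constant-velocity model, dt = 10 ms
Q = 1e-4 * np.eye(2)

def predict():
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q

def update(z, h_row, r):
    """Scalar measurement update: z = h_row @ x + noise with variance r."""
    global x, P
    H = np.atleast_2d(h_row)
    S = H @ P @ H.T + r
    K = P @ H.T / S
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

for k in range(100):
    predict()
    update(z=0.5 + 0.01 * np.random.randn(), h_row=[1.0, 0.0], r=1e-3)  # "force": every step
    if k % 10 == 0:                                                     # "vision": low rate
        update(z=0.5 + 0.05 * np.random.randn(), h_row=[1.0, 0.0], r=2e-2)
print("estimated position:", x[0])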
Abstract:
Bone strain plays a major role as the activation signal for the bone (re)modeling process, which is vital for keeping bones healthy. Maintaining high bone mineral density reduces the chance of fracture in the event of an accident. Numerous studies have shown that bones can be strengthened with physical exercise, and several hypotheses have asserted that dynamic exercise produces a stronger osteogenic (bone-producing) effect than static exercise. These previous studies are based on short-term empirical research, which motivates justifying the experimental results with a solid mathematical background. The computer simulation techniques utilized in this work allow for non-invasive bone strain estimation during physical activity at any bone site within the human skeleton. All models presented in the study are three-dimensional and actuated by muscle models to replicate real conditions accurately. The objective of this work is to determine and present loading-induced bone strain values resulting from physical activity. It includes a comparison of the strain resulting from four different gym exercises (knee flexion, knee extension, leg press, and squat) and walking, with the results reported for walking and jogging obtained from in-vivo measurements described in the literature. The objective is realized primarily by carrying out flexible multibody dynamics computer simulations. The dissertation combines knowledge of finite element analysis and multibody simulation with experimental data and information available in the medical literature. Measured subject-specific motion data were coupled with forward dynamics simulation to provide natural skeletal movement. Bone geometries were defined using a reverse engineering approach based on medical imaging techniques; both computed tomography and magnetic resonance imaging were utilized to explore modeling differences. The predicted tibia bone strains during walking show good agreement with in-vivo studies found in the literature. Strain measurements were not available for the gym exercises, so those strain results could not be validated; however, the values seem reasonable when compared to the available walking and running in-vivo strain measurements. The results can be used in the design of exercise equipment aimed at strengthening the bones as well as the muscles during a workout; clinical applications in post-fracture recovery exercise programs could also be targeted. In addition, the methodology introduced in this study can be applied to investigate the effect of weightlessness on astronauts, who often suffer bone loss after long periods spent in outer space.
Abstract:
In this work, image-based estimation methods, also known as direct methods, are studied; these avoid feature extraction and matching completely. The cost functions use raw pixels as measurements, and the goal is to produce precise 3D pose and structure estimates. The cost functions presented minimize the sensor error, because the measurements are not transformed or modified. In photometric camera pose estimation, 3D rotation and translation parameters are estimated by minimizing a sequence of image-based cost functions, which are non-linear due to perspective projection and lens distortion. In image-based structure refinement, on the other hand, 3D structure is refined using a number of additional views and an image-based cost metric. Image-based estimation methods are particularly useful in conditions where the Lambertian assumption holds and the 3D points have constant color regardless of viewing angle. The goal is to improve image-based estimation methods and to produce computationally efficient methods that can be accommodated in real-time applications. The developed image-based 3D pose and structure estimation methods are finally demonstrated in practice in indoor 3D reconstruction and in a live augmented reality application.
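The simplest instance of such a direct, raw-pixel cost is 1-D translation estimation by Gauss-Newton; a toy sketch with synthetic signals (purely illustrative, not the thesis' formulation):

# Direct (image-based) alignment: Gauss-Newton on raw pixels for a 1-D shift
import numpy as np

x = np.linspace(0, 10, 500)
template = np.exp(-(x - 4.0)**2)           # reference "image"
target = np.exp(-(x - 4.5)**2)             # same scene shifted by 0.5

t = 0.0                                     # translation parameter to estimate
for _ in range(20):
    warped = np.interp(x + t, x, target)    # sample target at shifted coordinates
    grad = np.gradient(warped, x)           # photometric Jacobian w.r.t. t
    r = warped - template                   # raw-pixel residual (sensor error)
    t -= np.sum(grad * r) / np.sum(grad * grad)   # Gauss-Newton update
print(f"estimated shift: {t:.3f} (true 0.5)")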
Abstract:
My thesis consists of three chapters related to the estimation of state-space and stochastic volatility models. In the first article, we develop a computationally efficient state-smoothing procedure for linear Gaussian state-space models. We show how to exploit the particular structure of state-space models to draw the latent states efficiently. We analyze the computational efficiency of methods based on the Kalman filter, the Cholesky factor algorithm, and our new method, using operation counts and computational experiments. We show that for many important cases our method is more efficient. The gains are particularly large when the dimension of the observed variables is large or when repeated draws of the states are required for the same parameter values. As an application, we consider a multivariate Poisson model with time-varying intensities, used to analyze transaction count data from financial markets. In the second chapter, we propose a new technique for analyzing multivariate stochastic volatility models. The proposed method is based on efficiently drawing the volatility from its conditional density given the parameters and the data. Our methodology applies to models with several types of cross-sectional dependence. We can model time-varying conditional correlation matrices by incorporating factors into the returns equation, where the factors are independent stochastic volatility processes, and we can incorporate copulas to allow conditional dependence of the returns given the volatility, permitting different Student-t marginals with return-specific degrees of freedom to capture the heterogeneity of returns. We draw the volatility as a block in the time dimension and one series at a time in the cross-sectional dimension. We apply the method introduced by McCausland (2012) to obtain a good approximation of the conditional posterior distribution of the volatility of one return given the volatilities of the other returns, the parameters, and the dynamic correlations. The model is evaluated using real data for ten exchange rates; we report results for univariate stochastic volatility models and two multivariate models. In the third chapter, we assess the information that realized volatility contributes to volatility estimation and forecasting when prices are measured with and without error, using stochastic volatility models. We take the viewpoint of an investor for whom volatility is an unknown latent variable and realized volatility is a sample quantity that contains information about it. We employ Bayesian Markov chain Monte Carlo methods to estimate the models, which allow the formulation not only of posterior densities of the volatility but also of predictive densities of future volatility. We compare the volatility forecasts and the hit rates of forecasts that do and do not use the information contained in realized volatility. This approach differs from those existing in the empirical literature in that the latter are most often limited to documenting the ability of realized volatility to forecast itself. We present empirical applications using daily returns of indices and exchange rates. The various competing models are applied to the second half of 2008, a notable period in the recent financial crisis.
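As a rough illustration of precision-based state smoothing of the kind discussed in the first chapter, consider a scalar local-level model, where the posterior precision of the states is tridiagonal and a draw costs O(T) via its banded Cholesky factor (a simplified analogue, not the thesis' algorithm):

# Precision-based simulation smoothing for a local-level model (toy example)
import numpy as np
from scipy.linalg import cholesky_banded, solve_banded

T, sig_eps, sig_eta = 200, 1.0, 0.5            # observation and state noise std devs
y = np.cumsum(sig_eta * np.random.randn(T)) + sig_eps * np.random.randn(T)

# Tridiagonal posterior precision Omega and co-vector c for the states a_1..a_T
he, hn = 1 / sig_eps**2, 1 / sig_eta**2
diag = np.full(T, he + 2 * hn); diag[0] = diag[-1] = he + hn
off = np.full(T - 1, -hn)
c = he * y

# Posterior mean: solve Omega m = c with a banded solver
ab = np.zeros((3, T)); ab[0, 1:] = off; ab[1] = diag; ab[2, :-1] = off
m = solve_banded((1, 1), ab, c)

# Draw from N(m, Omega^{-1}): back-substitute with the banded Cholesky factor
ub = np.zeros((2, T)); ub[0, 1:] = off; ub[1] = diag
U = cholesky_banded(ub)                        # Omega = U.T @ U, U upper-banded
z = np.random.randn(T)
draw = m + solve_banded((0, 1), U, z)          # solves U x = z, O(T) cost
print("posterior-mean endpoints:", m[0], m[-1])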
Abstract:
Biomechanical analysis of human movement using optoelectronic systems and skin markers treats body segments as rigid bodies. However, the movement of soft tissue relative to the bone, that is, of the muscles and adipose tissue, causes marker displacement. This displacement comprises two components: an individual component corresponding to the random movement of each marker, and an in-unison component producing the common displacement of the skin markers tied to the movement of the underlying masses. While many studies aim to minimize these displacements, simulations have shown that soft-tissue movement reduces joint dynamics. This observation has been made only through simulation, because no method exists that can dissociate the kinematics of the soft tissue from those of the bone. The main objective of this thesis is to develop a numerical method capable of distinguishing these two kinematics. The first objective was to evaluate a local optimization method for estimating the movement of the soft tissue relative to the humerus, obtained with an intracortical pin screwed into the bone in three subjects. The results show that local optimization underestimates marker displacement by 50% and leads to a different ranking of markers according to their displacement. The limitation of this method is that it does not account for all the components of soft-tissue movement, in particular the in-unison component. The second objective was to develop a numerical method that considers all the components of soft-tissue movement. More precisely, this method had to provide similar kinematics and a larger estimate of marker displacement than conventional methods, and to dissociate these components. The lower limb is modeled with a 10-degree-of-freedom kinematic chain reconstructed by global optimization using only the markers placed on the pelvis and the medial face of the tibia. Estimating the kinematics without considering the markers placed on the thigh and calf avoids the influence of their displacement on the reconstruction of the kinematic model. This method, tested on 13 subjects during jumps, yielded up to 2.1 times more marker displacement, depending on the method used for comparison, while producing similar kinematics. A vector approach showed that marker displacement is mainly due to the in-unison component. A matrix approach combining local optimization with the kinematic chain showed that the soft masses move mainly around the longitudinal axis and along the antero-posterior axis of the bone. The originality of this thesis is to numerically dissociate the bone kinematics from those of the soft masses, and to dissociate the components of this movement. The methods developed in this thesis increase knowledge of soft-tissue movement and open the way to studying its effect on joint dynamics.
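Global (multibody) optimization of the kind used for the kinematic chain amounts to a least-squares fit of model-predicted marker positions to measured ones; a minimal 2-DOF planar analogue (illustrative only, not the thesis' 10-DOF model):

# Global optimization of a planar two-segment chain against measured markers
import numpy as np
from scipy.optimize import least_squares

L1, L2 = 0.4, 0.4                      # segment lengths (m), assumed known

def model_markers(q):
    """Marker positions at the end of each segment for joint angles q."""
    j1 = np.array([L1 * np.cos(q[0]), L1 * np.sin(q[0])])
    j2 = j1 + np.array([L2 * np.cos(q[0] + q[1]), L2 * np.sin(q[0] + q[1])])
    return np.concatenate([j1, j2])

# Synthetic "measured" markers: true pose plus soft-tissue-like noise
measured = model_markers(np.array([0.6, -0.4])) + 0.005 * np.random.randn(4)
sol = least_squares(lambda q: model_markers(q) - measured, x0=np.zeros(2))
print("recovered joint angles (rad):", sol.x)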
Abstract:
Following an internship with the company Hatch, we have datasets consisting of time series of wind speeds measured at various sites around the world over several years. Hatch's wind engineers use these datasets together with Environment Canada's databases to evaluate wind potential, in order to decide whether it is worthwhile to install wind turbines at these locations. In recent years, companies have begun offering mesoscale simulations of wind speeds, based on various environmental indices of the site to be evaluated. The wind engineers want to know whether these simulated data are worth paying for, that is, whether they can be useful when estimating wind energy production and whether they could be used for long-term wind speed prediction. Moreover, since we have measured wind speed data, we take the opportunity to test, with various statistical methods, different steps of the energy production estimation. We examine methods for extrapolating wind speed to the height of a wind turbine and evaluate these methods using the mean squared error. We also study the modeling of wind speed with the Weibull distribution and the variation of the speed distribution over time. Finally, using cross-validation and the bootstrap, we examine whether the use of mesoscale data is preferable to that of reference station data, and we also test a model in which both types of data are used to predict wind speed. We test, from a statistical point of view, the overall methodology currently used by wind engineers for estimating energy production, and then attempt to propose changes to this methodology that could improve the estimation of annual energy production.
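Two of the steps discussed, hub-height extrapolation and Weibull fitting, can be sketched on synthetic data as follows (the shear exponent and heights are illustrative assumptions, not values from the thesis):

# Power-law extrapolation to hub height, then a two-parameter Weibull fit
import numpy as np
from scipy.stats import weibull_min

v10 = weibull_min.rvs(2.0, scale=7.0, size=5000)   # synthetic 10 m wind speeds (m/s)

# Power-law vertical extrapolation to an 80 m hub with an assumed shear exponent
alpha, z_ref, z_hub = 0.14, 10.0, 80.0
v_hub = v10 * (z_hub / z_ref) ** alpha

# Fit a two-parameter Weibull (location fixed at 0) to the hub-height speeds
k, _, c = weibull_min.fit(v_hub, floc=0)
print(f"Weibull shape k = {k:.2f}, scale c = {c:.2f} m/s")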
Abstract:
This thesis presents the development of hardware, theory, and experimental methods to enable a robotic manipulator arm to interact with soils and estimate soil properties from interaction forces. Unlike the majority of robotic systems interacting with soil, our objective is parameter estimation, not excavation. To this end, we design our manipulator with a flat plate for easy modeling of interactions. By using a flat plate, we take advantage of the wealth of research on the similar problem of earth pressure on retaining walls. A number of earth pressure models exist; these models typically provide estimates of force whose relation to the true force is uncertain. A recent technique, known as numerical limit analysis, provides upper and lower bounds on the true force, and its predictions are shown to be in good agreement with other accepted models. Experimental methods for plate insertion, soil-tool interface friction estimation, and control of applied forces on the soil are presented. In addition, a novel graphical technique for inverting the soil models is developed, which is an improvement over standard nonlinear optimization. This graphical technique utilizes the uncertainties associated with each set of force measurements to obtain all possible parameters that could have produced the measured forces. The system is tested on three cohesionless soils, two in a loose state and one in both a loose and a dense state, and the results are compared with friction angles obtained from direct shear tests. The results highlight a number of key points. Common assumptions are made in soil modeling, most notably the Mohr-Coulomb failure law and perfectly plastic behavior. In the direct shear tests, a marked dependence of friction angle on normal stress at low stresses is found; this has ramifications for any study of friction done at low stresses. In addition, gradual failures are often observed for vertical tools and for tools inclined away from the direction of motion. After accounting for the change in friction angle at low stresses, the results show good agreement with the direct shear values.
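For reference, extracting a Mohr-Coulomb friction angle from direct-shear data is a linear fit of peak shear stress against normal stress; a sketch with fabricated stresses:

# Mohr-Coulomb fit tau = c + sigma * tan(phi) to direct-shear data (fabricated values)
import numpy as np

sigma = np.array([5.0, 10.0, 20.0, 40.0])      # normal stress (kPa)
tau = np.array([3.9, 7.2, 13.6, 26.1])         # peak shear stress (kPa)

slope, intercept = np.polyfit(sigma, tau, 1)
phi = np.degrees(np.arctan(slope))
print(f"friction angle phi = {phi:.1f} deg, apparent cohesion c = {intercept:.2f} kPa")
# For a cohesionless soil c should be ~0; a nonzero fit at low stresses reflects
# the normal-stress dependence of the friction angle noted above.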
Abstract:
We describe the key role played by partial evaluation in the Supercomputing Toolkit, a parallel computing system for scientific applications that effectively exploits the vast amount of parallelism exposed by partial evaluation. The Supercomputing Toolkit parallel processor and its associated partial evaluation-based compiler have been used extensively by scientists at MIT, and have made possible recent results in astrophysics showing that the motion of the planets in our solar system is chaotically unstable.
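A toy illustration of partial evaluation in Python (not the Toolkit's compiler): specializing a leapfrog integrator for a fixed step size and gravitational parameter, so the residual function contains only arithmetic on the unknown state, with constants folded at specialization time:

# Partial evaluation in miniature: specialize an integrator for known constants
import math

def make_kepler_step(dt, gm):
    """Partially evaluate a leapfrog step with dt and GM known at specialization time."""
    half = 0.5 * dt * gm                     # folded constant
    def step(x, y, vx, vy):
        r3 = math.hypot(x, y) ** 3
        vx -= half * x / r3; vy -= half * y / r3   # half kick
        x += dt * vx;        y += dt * vy          # drift
        r3 = math.hypot(x, y) ** 3
        vx -= half * x / r3; vy -= half * y / r3   # half kick
        return x, y, vx, vy
    return step

step = make_kepler_step(dt=1e-3, gm=1.0)     # specialized, branch-free step
state = (1.0, 0.0, 0.0, 1.0)                 # circular-orbit initial conditions
for _ in range(1000):
    state = step(*state)
print("position after 1000 steps:", state[:2])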