969 results for Motion estimation
Abstract:
In this paper, a nonlinear optimal controller is designed for aerodynamic control during the reentry phase of a Reusable Launch Vehicle (RLV). The controller is based on the recently developed technique of Optimal Dynamic Inversion (ODI). For full state feedback, the controller requires complete information about the system states, so an Extended Kalman Filter (EKF) is developed to estimate them. The vehicle is modelled as a nonlinear Six-Degree-Of-Freedom (6-DOF) system. The simulation results show that the EKF gives very good estimates of the states and works well with ODI: the resulting trajectories closely match those obtained with perfect state feedback using ODI alone.
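The EKF used above follows the standard predict/update cycle: propagate the estimate and its covariance through the nonlinear dynamics, then correct with each measurement via a linearized Kalman gain. A minimal sketch on a scalar toy system (the dynamics `f`, measurement `h`, and noise levels are invented for illustration; the paper's 6-DOF RLV model is far larger):

```python
import numpy as np

# Scalar toy system: x' = f(x), z = h(x) + noise. All functions and noise
# levels are invented for illustration.
def f(x):                      # nonlinear state transition
    return x - 0.1 * np.sin(x)

def F_jac(x):                  # d f / d x
    return 1.0 - 0.1 * np.cos(x)

def h(x):                      # nonlinear measurement
    return x ** 2

def H_jac(x):                  # d h / d x
    return 2.0 * x

def ekf_step(x_est, P, z, Q=1e-3, R=1e-2):
    # Predict: propagate estimate and covariance through f
    x_pred = f(x_est)
    P_pred = F_jac(x_est) * P * F_jac(x_est) + Q
    # Update: correct with the measurement via the Kalman gain
    H = H_jac(x_pred)
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - h(x_pred))
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

rng = np.random.default_rng(0)
x_true, x_est, P = 1.0, 0.5, 1.0   # filter starts from a wrong initial state
for _ in range(50):
    x_true = f(x_true)
    z = h(x_true) + rng.normal(scale=0.1)
    x_est, P = ekf_step(x_est, P, z)
```

For a vector state the scalar products become matrix operations (Jacobian matrices, `P` a covariance matrix), but the structure is identical.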
Abstract:
The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects, and in a case study example provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL), and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest, were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and was able to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy, and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a placebo. The conclusions are consistent with those obtained using a 'magnitude-based inference' approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.
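Probabilistic statements of the kind reported above (e.g. "a 0.96 probability of a substantially greater increase") fall directly out of posterior samples: they are the fraction of draws exceeding a threshold. A minimal sketch with a random-walk Metropolis sampler on synthetic data (the data, flat prior, known scale, and smallest-worthwhile threshold are all invented, and this is not the study's actual model):

```python
import numpy as np

# Synthetic "effect" data: the true mean and scale are made up.
rng = np.random.default_rng(1)
data = rng.normal(loc=1.5, scale=1.0, size=30)
sigma = 1.0                                       # known scale, for simplicity

def log_post(mu):
    # Flat prior on mu, so the log-posterior is the log-likelihood
    return -0.5 * np.sum((data - mu) ** 2) / sigma ** 2

# Random-walk Metropolis: propose, accept with probability min(1, ratio)
samples, mu = [], 0.0
for _ in range(20000):
    prop = mu + rng.normal(scale=0.3)
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop
    samples.append(mu)
samples = np.array(samples[5000:])                # discard burn-in

# Probability that the effect exceeds a smallest-worthwhile threshold of 1.0
p_substantial = np.mean(samples > 1.0)
```

In practice one would use a dedicated sampler (e.g. Gibbs or Hamiltonian Monte Carlo) and a full hierarchical model, but the probability-of-a-substantial-effect computation is the same one-line summary of the posterior draws.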
Abstract:
This article draws on the design and implementation of three mobile learning projects introduced by Flanagan in 2011, 2012 and 2014, engaging a total of 206 participants. The latest of these projects is highlighted in this article. Two other projects provide additional examples of innovative strategies to engage mobile and cloud systems, describing how electronic and mobile technology can help facilitate teaching and learning, assessment for learning and assessment as learning, and support communities of practice. The second section explains the theoretical premise supporting the implementation of technology and promulgates a hermeneutic phenomenological approach. The third section discusses mobility, both in terms of the exploration of wearable technology in the prototypes developed as a result of the projects, and the affordances of mobility within pedagogy. Finally, the quantitative and qualitative methods in place to evaluate m-learning are explained.
Abstract:
The wave functions of moving bound states may be expected to contract in the direction of motion, in analogy to a rigid rod in classical special relativity, when the constituents are taken at equal (ordinary) time. Indeed, the Lorentz contraction of wave functions is often appealed to in qualitative discussions. However, only a few field-theory studies of equal-time wave functions in motion exist. In this thesis I use the Bethe-Salpeter formalism to study the wave function of a weakly bound state, such as a hydrogen atom or positronium, in a general frame. The wave function of the e^-e^+ component of positronium indeed turns out to Lorentz contract in both 1+1 and 3+1 dimensional quantum electrodynamics, whereas the next-to-leading e^-e^+\gamma Fock component of the 3+1 dimensional theory deviates from classical contraction. The second topic of this thesis concerns single spin asymmetries measured in scattering on polarized bound states. Such spin asymmetries have so far mainly been analyzed using the twist expansion of perturbative QCD. I note that QCD vacuum effects may give rise to a helicity flip in the soft rescattering of the struck quark, and that this would cause a nonvanishing spin asymmetry in \ell p^\uparrow -> \ell' + \pi + X in the Bjorken limit. An analogous asymmetry may arise in p p^\uparrow -> \pi + X from Pomeron-Odderon interference, if the Odderon has a helicity-flip coupling. Finally, I study the possibility that the large single spin asymmetry observed in p p^\uparrow -> \pi(x_F,k_\perp) + X when the pion carries a high momentum fraction x_F of the polarized proton momentum arises from coherent effects involving the entire polarized bound state.
Abstract:
Strong-motion array records are analyzed in this paper to identify and map the source zones of four past earthquakes. The source is represented as a sequence of double couples evolving as ramp functions, triggering at different instants, distributed over a region yet to be mapped. The known surface-level ground motion time histories are treated as responses to the unknown double couples on the fault surface. The location, orientation, magnitude, and rise time of the double couples are found by minimizing the mean-square error between the analytical solution and the instrumental data. Numerical results are presented for the Chi-Chi, Imperial Valley, San Fernando, and Uttarakashi earthquakes. The results are in good agreement with field investigations and with those obtained from conventional finite-fault source inversions.
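The inversion idea, minimizing the mean-square misfit between predicted and recorded motions over the unknown source parameters, can be illustrated in a linearized toy form: candidate sources contribute known ramp-shaped responses and only their amplitudes are sought. The basis functions, trigger times and amplitudes below are fabricated, and the real problem is nonlinear in location, orientation and rise time:

```python
import numpy as np

# Three candidate sources, each contributing a ramp response that
# triggers at a different (assumed known) instant.
t = np.linspace(0.0, 10.0, 200)
basis = np.stack([np.maximum(t - t0, 0.0) for t0 in (1.0, 3.0, 5.0)])

# Synthetic "observed" record: only the first and third sources are active.
true_amp = np.array([0.8, 0.0, 1.5])
obs = true_amp @ basis
obs += 0.01 * np.random.default_rng(2).normal(size=t.size)   # recording noise

# Least-squares inversion for the source amplitudes
amp, *_ = np.linalg.lstsq(basis.T, obs, rcond=None)
```

With trigger times, orientations and locations also unknown, the misfit is minimized by nonlinear optimization over those parameters, with a linear solve of this kind nested inside.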
Abstract:
An estimate of the groundwater budget at the catchment scale is extremely important for the sustainable management of available water resources. Water resources are generally subjected to over-exploitation for agricultural and domestic purposes in agrarian economies like India. The double water-table fluctuation method is a reliable method for calculating the water budget in semi-arid crystalline rock areas. Extensive measurements of water levels from a dense network before and after the monsoon rainfall were made in a 53 km² watershed in southern India and various components of the water balance were then calculated. Later, water level data underwent geostatistical analyses to determine the priority and/or redundancy of each measurement point using a cross-validation method. An optimal network evolved from these analyses. The network was then used in re-calculation of the water-balance components. It was established that such an optimized network provides far fewer measurement points without considerably changing the conclusions regarding groundwater budget. This exercise is helpful in reducing the time and expenditure involved in exhaustive piezometric surveys and also in determining the water budget for large watersheds (watersheds greater than 50 km²).
Abstract:
This paper deals with the development of simplified semi-empirical relations for the prediction of residual velocities of small calibre projectiles impacting on mild steel target plates, normally or at an angle, and the ballistic limits for such plates. It has been shown, for several impact cases for which test results on perforation of mild steel plates are available, that most of the existing semi-empirical relations which are applicable only to normal projectile impact do not yield satisfactory estimations of residual velocity. Furthermore, it is difficult to quantify some of the empirical parameters present in these relations for a given problem. With an eye towards simplicity and ease of use, two new regression-based relations employing standard material parameters have been discussed here for predicting residual velocity and ballistic limit for both normal and oblique impact. The latter expressions differ in terms of usage of quasi-static or strain rate-dependent average plate material strength. Residual velocities yielded by the present semi-empirical models compare well with the experimental results. Additionally, ballistic limits from these relations show close correlation with the corresponding finite element-based predictions.
Abstract:
Increased emphasis on rotorcraft performance and operational capabilities has created a need for accurate computation of aerodynamic stability and control parameters. System identification is one such tool, in which the model structure and parameters such as aerodynamic stability and control derivatives are derived. In the present work, the rotorcraft aerodynamic parameters are computed using radial basis function neural networks (RBFN) in the presence of both state and measurement noise. The effect of outliers in the data is also considered. RBFN is found to give superior results compared to finite-difference derivatives for noisy data.
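The advantage of an RBFN over finite differences on noisy data is that the network fits a smooth surface through the samples, which can then be differentiated analytically. A minimal sketch (the test function, centres, width and ridge factor are arbitrary choices, not the paper's flight-data setup):

```python
import numpy as np

# Noisy samples of a smooth response curve (stand-in for flight data)
rng = np.random.default_rng(3)
x = np.linspace(-1.0, 1.0, 60)
y = np.sin(2.0 * x) + 0.02 * rng.normal(size=x.size)

# Gaussian RBF design matrix: one basis function per centre
centres = np.linspace(-1.2, 1.2, 15)
width = 0.25
def design(xq):
    return np.exp(-((np.atleast_1d(xq)[:, None] - centres) ** 2)
                  / (2.0 * width ** 2))

# Ridge-regularised least squares for the output-layer weights
Phi = design(x)
w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(centres.size), Phi.T @ y)

def rbfn(xq):                  # smooth fitted curve
    return design(xq) @ w

def rbfn_deriv(xq):            # analytic derivative of the fitted curve
    xq2 = np.atleast_1d(xq)[:, None]
    return (design(xq) * (centres - xq2) / width ** 2) @ w
```

Differentiating the fitted network instead of the raw samples avoids the noise amplification that makes finite-difference derivatives unreliable.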
Abstract:
Orthogonal Frequency Division Multiplexing (OFDM) is a form of multi-carrier modulation in which the data stream is transmitted over a number of mutually orthogonal carriers, i.e. the carrier spacing is selected so that each carrier is located at the zeroes of all other carriers in the spectral domain. This paper proposes a novel iterative frequency offset estimation algorithm for an OFDM system, intended to recover the OFDM data symbols error-free over a noisy channel at the receiver and to achieve frequency synchronization between the transmitter and the receiver. The performance of the algorithm has been successfully evaluated in AWGN, ADSL and SUI channels.
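For context on the problem being solved: a carrier frequency offset rotates the received samples, and the cyclic prefix gives two copies of the same samples exactly one symbol length apart, so their phase difference reveals the offset. The sketch below is the classical cyclic-prefix correlation estimator, not the iterative algorithm proposed in the paper; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
N, Ncp = 64, 16                  # FFT size and cyclic-prefix length
eps_true = 0.12                  # frequency offset, in subcarrier spacings

# QPSK subcarriers -> time-domain OFDM symbol with cyclic prefix
X = rng.choice([-1.0, 1.0], size=N) + 1j * rng.choice([-1.0, 1.0], size=N)
x = np.fft.ifft(X)
tx = np.concatenate([x[-Ncp:], x])

# Channel applies the frequency offset plus a little noise
n = np.arange(tx.size)
rx = tx * np.exp(2j * np.pi * eps_true * n / N)
rx += 0.01 * (rng.normal(size=rx.size) + 1j * rng.normal(size=rx.size))

# CP samples and the symbol tail are N samples apart; the angle of their
# correlation is 2*pi*eps
corr = np.sum(np.conj(rx[:Ncp]) * rx[N:N + Ncp])
eps_hat = np.angle(corr) / (2.0 * np.pi)
```

This estimator is unambiguous only for offsets below half a subcarrier spacing, which is one reason iterative and wider-range schemes are of interest.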
Abstract:
Orthogonal Frequency Division Multiplexing (OFDM) is a form of multi-carrier modulation in which the data stream is transmitted over a number of mutually orthogonal carriers, i.e. the carrier spacing is selected so that each carrier is located at the zeroes of all other carriers in the spectral domain. This paper proposes a novel sampling offset estimation algorithm for an OFDM system, intended to recover the OFDM data symbols error-free over a noisy channel at the receiver and to achieve fine timing synchronization between the transmitter and the receiver. The performance of the algorithm has been successfully evaluated in AWGN, ADSL and SUI channels.
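The timing side of the problem can likewise be illustrated with the cyclic prefix: sliding a correlation window over the received stream, the metric peaks where the prefix lines up with the symbol tail. Again, this is only a textbook coarse-timing illustration, not the paper's algorithm, and all parameters are made up:

```python
import numpy as np

rng = np.random.default_rng(5)
N, Ncp, delay = 64, 16, 23       # FFT size, CP length, unknown timing offset

# One OFDM symbol with cyclic prefix, preceded/followed by noise
X = rng.choice([-1.0, 1.0], size=N) + 1j * rng.choice([-1.0, 1.0], size=N)
x = np.fft.ifft(X)
sym = np.concatenate([x[-Ncp:], x])
noise = lambda m: 0.02 * (rng.normal(size=m) + 1j * rng.normal(size=m))
rx = np.concatenate([noise(delay), sym, noise(40)])

# Slide a window: the metric peaks where the CP matches the symbol tail
metric = [np.abs(np.sum(np.conj(rx[d:d + Ncp]) * rx[d + N:d + N + Ncp]))
          for d in range(rx.size - N - Ncp)]
d_hat = int(np.argmax(metric))
```

Such a correlation peak only gives coarse symbol timing; residual sampling offset is what finer estimators of the kind proposed here must track.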
Abstract:
This study examines the properties of Generalised Regression (GREG) estimators for domain class frequencies and proportions. The family of GREG estimators forms the class of design-based model-assisted estimators. All GREG estimators utilise auxiliary information via modelling. The classic GREG estimator with a linear fixed-effects assisting model (GREG-lin) is one example. When estimating class frequencies, however, the study variable is binary or polytomous, so logistic-type assisting models (e.g. logistic or probit models) should be preferred over the linear one. Nevertheless, GREG estimators other than GREG-lin are rarely used, and knowledge about their properties is limited. This study examines the properties of L-GREG estimators, which are GREG estimators with fixed-effects logistic-type models. Three research questions are addressed. First, I study whether and when L-GREG estimators are more accurate than GREG-lin. Theoretical results and Monte Carlo experiments, covering both equal and unequal probability sampling designs and a wide variety of model formulations, show that in standard situations the difference between L-GREG and GREG-lin is small. In the case of a strong assisting model, however, two interesting situations arise: if the domain sample size is reasonably large, L-GREG is more accurate than GREG-lin, and if the domain sample size is very small, estimation of the assisting-model parameters may be inaccurate, resulting in bias for L-GREG. Second, I study variance estimation for the L-GREG estimators. The standard variance estimator (S) for all GREG estimators resembles the Sen-Yates-Grundy variance estimator, but it is a double sum of prediction errors, not of the observed values of the study variable. Monte Carlo experiments show that S underestimates the variance of L-GREG, especially if the domain sample size is small or the assisting model is strong.
Third, since the standard variance estimator S often fails for the L-GREG estimators, I propose a new augmented variance estimator (A). The difference between S and the new estimator A is that the latter takes into account the difference between the sample fit model and the census fit model. In Monte Carlo experiments, the new estimator A outperformed the standard estimator S in terms of bias, root mean square error and coverage rate. Thus the new estimator provides a good alternative to the standard estimator.
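The L-GREG construction described above can be sketched in miniature: fit a logistic assisting model on the sample, sum its predictions over the whole frame, and add the design-weighted sample residuals. The population, model and sampling design below are entirely synthetic, and the sketch covers only point estimation under simple random sampling, not the variance estimators S and A:

```python
import numpy as np

# Synthetic population: binary study variable y driven by a known
# auxiliary variable x through a logistic model (coefficients invented)
rng = np.random.default_rng(6)
Npop = 5000
x = rng.normal(size=Npop)
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.2 * x)))
y = (rng.uniform(size=Npop) < p).astype(float)

# Simple random sample without replacement; equal design weights
n = 200
s = rng.choice(Npop, size=n, replace=False)
w = Npop / n

# Fit the logistic assisting model on the sample by Newton iterations
Xs = np.column_stack([np.ones(n), x[s]])
beta = np.zeros(2)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-Xs @ beta))
    W = mu * (1.0 - mu)
    beta += np.linalg.solve(Xs.T @ (W[:, None] * Xs), Xs.T @ (y[s] - mu))

# GREG estimate of the class total: population sum of model predictions
# plus the design-weighted sum of sample residuals
Xall = np.column_stack([np.ones(Npop), x])
yhat = 1.0 / (1.0 + np.exp(-Xall @ beta))
t_greg = yhat.sum() + w * (y[s] - yhat[s]).sum()
```

Restricting the sums to a domain's units gives the corresponding domain class-frequency estimate; with unequal-probability designs the constant weight `w` is replaced by unit-level inverse inclusion probabilities.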