7 results for Derivations

in Queensland University of Technology - ePrints Archive


Relevance: 10.00%

Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean-square algorithm, including a modular structure, easily guaranteed stability, lower sensitivity to the eigenvalue spread of the input autocorrelation matrix, and straightforward quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task because the adaptive reflection coefficients depend nonlinearly on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing an averaged tracking model of these coefficients for the stochastic gradient lattice algorithm. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using these analytical results, we demonstrate a new property of adaptive lattice filters, the polynomial-order-reducing property, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise; we show that this technique achieves a better probability of detection for the reduced-order phase signal than the traditional energy detector. It is also empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficient approximates a Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated; we show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signal, i.e., impulsive autoregressive processes with alpha-stable distributions.
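
To make the algorithm concrete, the following is a minimal sketch of a stochastic gradient lattice predictor for a real-valued input; the step size, variable names, and the restriction to real signals are illustrative assumptions (the thesis treats complex signals), not the exact formulation analyzed above.

```python
import numpy as np

def sg_lattice(x, order, mu=0.01):
    """Stochastic gradient lattice predictor (minimal sketch).

    Returns the reflection-coefficient trajectory and the final-stage
    forward prediction error for a real-valued input sequence x.
    """
    x = np.asarray(x, dtype=float)
    k = np.zeros(order)                # reflection coefficients
    b_prev = np.zeros(order + 1)       # backward errors from time n-1
    k_hist = np.zeros((len(x), order))
    f_out = np.zeros(len(x))
    for n, xn in enumerate(x):
        f = np.zeros(order + 1)
        b = np.zeros(order + 1)
        f[0] = b[0] = xn               # stage 0: both errors equal the input
        for m in range(1, order + 1):
            f[m] = f[m - 1] + k[m - 1] * b_prev[m - 1]
            b[m] = b_prev[m - 1] + k[m - 1] * f[m - 1]
            # gradient step on the stage cost f_m^2 + b_m^2 (factor 2 absorbed in mu)
            k[m - 1] -= mu * (f[m] * b_prev[m - 1] + b[m] * f[m - 1])
        b_prev = b
        k_hist[n] = k
        f_out[n] = f[order]
    return k_hist, f_out
```
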
The concept of alpha-stable distributions is first introduced. We discuss why the stochastic gradient algorithm, which performs well for finite-variance input signals (such as frequency modulated signals in noise), converges slowly for infinite-variance stable processes, a consequence of its minimum mean-square error criterion. To deal with such problems, the minimum dispersion criterion, fractional lower-order moments, and recently developed algorithms for stable processes are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower-order moments. Simulation results show that the proposed algorithms achieve faster convergence in estimating the parameters of autoregressive stable processes with low to moderate degrees of impulsiveness than many other algorithms. We also discuss how the impulsiveness of stable processes generates misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated only through extensive computer simulations.
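
For the alpha-stable case, the flavor of a least-mean p-norm style lattice update can be sketched by replacing the squared-error gradient with a fractional lower-order one; the cost |f|^p + |b|^p with 1 <= p < alpha, and all names and values below, are illustrative assumptions rather than the thesis's exact algorithms.

```python
import numpy as np

def lmp_lattice_step(k, f_prev, b_prev_delayed, p=1.4, mu=0.005):
    """One reflection-coefficient update of a least-mean p-norm style
    lattice stage (illustrative sketch). Minimizing |f|^p + |b|^p
    replaces the error e in the gradient with |e|**(p-1) * sign(e),
    which damps the influence of impulsive samples.
    """
    f = f_prev + k * b_prev_delayed    # forward error of this stage
    b = b_prev_delayed + k * f_prev    # backward error of this stage
    grad = (np.abs(f) ** (p - 1) * np.sign(f) * b_prev_delayed
            + np.abs(b) ** (p - 1) * np.sign(b) * f_prev)
    return k - mu * grad, f, b
```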

Relevance: 10.00%

Abstract:

Identifying crash "hotspots", "blackspots", "sites with promise", or "high risk" locations is standard practice in departments of transportation throughout the US. The literature is replete with the development and discussion of statistical methods for hotspot identification (HSID). Theoretical derivations and empirical studies have been used to weigh the benefits of various HSID methods; however, few studies have used controlled experiments to assess them systematically. Using experimentally derived simulated data, which are argued to be superior to empirical data for this purpose, three HSID methods observed in practice are evaluated: simple ranking, confidence interval, and Empirical Bayes. With simulated data, sites with promise are known a priori, in contrast to empirical data, where high-risk sites are not known for certain. To conduct the evaluation, properties of observed crash data are used to generate simulated crash frequency distributions at hypothetical sites. A variety of factors is manipulated to simulate a host of 'real world' conditions. Various confidence levels are explored, and false positives (identifying a safe site as high risk) and false negatives (identifying a high-risk site as safe) are compared across methods. Finally, the effect of crash history duration on the three HSID approaches is assessed. The results illustrate that the Empirical Bayes technique significantly outperforms the ranking and confidence interval techniques (with certain caveats). As found by others, false positives and false negatives are inversely related. Three years of crash history appears, in general, to be an appropriate duration.
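
The evaluation logic translates naturally into a toy simulation. The sketch below assumes a Poisson-gamma crash model with site-level predicted frequencies; the model, parameter values, and function names are illustrative assumptions, not the paper's experimental design.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_hsid(n_sites=1000, phi=2.0, flag_frac=0.10, years=3):
    """Toy comparison of simple ranking vs. Empirical Bayes HSID."""
    lam = rng.uniform(0.5, 10.0, n_sites)         # predicted crashes/yr per site
    theta = rng.gamma(phi, 1.0 / phi, n_sites)    # latent site risk multiplier
    x = rng.poisson(lam * theta * years)          # observed crash history

    n_flag = int(flag_frac * n_sites)
    truly_high = set(np.argsort(lam * theta)[-n_flag:])  # known a priori here

    rank_flag = set(np.argsort(x)[-n_flag:])      # simple ranking on raw counts

    # Empirical Bayes: shrink the observed count toward the prediction
    w = phi / (phi + lam * years)
    eb = w * lam * years + (1 - w) * x
    eb_flag = set(np.argsort(eb)[-n_flag:])

    def rates(flagged):
        fp = len(flagged - truly_high) / n_flag   # safe site called high risk
        fn = len(truly_high - flagged) / n_flag   # high-risk site called safe
        return round(fp, 3), round(fn, 3)

    return {"ranking": rates(rank_flag), "empirical_bayes": rates(eb_flag)}

print(simulate_hsid())
```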

Relevance: 10.00%

Abstract:

A new accelerometer, the Kenz Lifecorder EX (LC; Suzuken Co. Ltd, Nagoya, Japan), offers promise as a feasible alternative to the commonly used Actigraph (AG; Actigraph LLC, Fort Walton Beach, FL). Purpose: This study compared the LC and AG accelerometers and the Yamax SW-200 pedometer (DW) under free-living conditions with regard to children's steps taken and time in light-intensity physical activity (PA) and moderate-to-vigorous PA (MVPA). Methods: Participants (N = 31, age = 10.2 ± 0.4 yr) wore the LC, AG, and DW monitors from arrival at school (7:45 a.m.) until they went to bed. Time in light and MVPA intensities was calculated using two intensity classifications for the LC (LC_4 and LC_5) and four for the AG (AG_Treuth, AG_Puyau, AG_Trost, and AG_Freedson). Both accelerometers provided steps as outputs; DW steps were self-recorded. Repeated-measures ANOVA was used to compare overlapping monitor outputs. Results: There was no difference between DW and LC steps (Δ = 200 steps), but a nonsignificant trend was observed in the pairwise comparison between DW and AG steps (Δ = 1001 steps, P = 0.058). The AG detected significantly more steps than the LC (Δ = 801 steps, P = 0.001). Estimates of light-intensity activity minutes ranged from a low of 75.6 ± 18.4 min (LC_4) to a high of 309 ± 69.2 min (AG_Treuth). Estimates of MVPA minutes ranged from a low of 25.9 ± 9.4 min (LC_5) to a high of 112.2 ± 34.5 min (AG_Freedson). No significant differences in MVPA were seen between LC_5 and AG_Treuth (Δ = 4.9 min) or AG_Puyau (Δ = 1.7 min). Conclusion: The LC detected a comparable number of steps to the DW but significantly fewer steps than the AG in children. Current results indicate that the LC_5 and either the AG_Treuth or AG_Puyau intensity derivations provide similar mean estimates of time in MVPA during free-living activity in 10-yr-old children.
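
The intensity classifications above are cut-point rules applied to epoch-level counts. The sketch below shows the mechanics with placeholder cut-points; the published Treuth, Puyau, Trost, and Freedson thresholds differ, so the numbers here are purely illustrative.

```python
import numpy as np

# Placeholder cut-points in counts per minute; NOT the published values.
LIGHT_CUT = 100
MVPA_CUT = 2000

def minutes_by_intensity(cpm):
    """Classify one-minute epochs of accelerometer counts into
    light-intensity and MVPA time."""
    cpm = np.asarray(cpm)
    light = int(np.sum((cpm >= LIGHT_CUT) & (cpm < MVPA_CUT)))
    mvpa = int(np.sum(cpm >= MVPA_CUT))
    return {"light_min": light, "mvpa_min": mvpa}

# A simulated day of 600 one-minute epochs
day = np.random.default_rng(1).integers(0, 4000, size=600)
print(minutes_by_intensity(day))
```

Because the totals depend directly on where the cut-points sit, different intensity derivations can produce the large spread in light and MVPA minutes reported above.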

Relevance: 10.00%

Abstract:

Many modern business environments employ software to automate the delivery of workflows, yet workflow design and generation remain a laborious technical task for domain specialists. Several different approaches have been proposed for deriving workflow models: some rely on process data mining, whereas others derive workflow models from operational structures or domain-specific knowledge, or compose them from knowledge bases. Many approaches draw on principles from automatic planning but are conceptual in nature and lack mathematical justification. In this paper we present a mathematical framework for deducing tasks in workflow models from plans in mechanistic or strongly controlled work environments, with a focus on automatic plan generation. In addition, we introduce an associative composition operator that permits crisp hierarchical task composition for workflow models through a set of mathematical deduction rules, and we prove its properties. The result is a logical framework that can be used to deduce tasks in workflow hierarchies from operational information about work processes and machine configurations in controlled or mechanistic work environments.
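
One way to see what associativity buys is a minimal sketch in which a task carries its flattened plan and sequential composition is tuple concatenation, which is associative by construction; the types and names are illustrative, not the paper's formal operator.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    """A workflow task carrying its flattened plan (illustrative)."""
    name: str
    steps: tuple

def compose(a: Task, b: Task) -> Task:
    """Sequential composition; associative because tuple concatenation is."""
    return Task(f"({a.name};{b.name})", a.steps + b.steps)

t1 = Task("fetch", ("fetch",))
t2 = Task("machine", ("machine",))
t3 = Task("inspect", ("inspect",))

# Grouping does not change the composed plan, so hierarchies stay crisp:
assert compose(compose(t1, t2), t3).steps == compose(t1, compose(t2, t3)).steps
```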

Relevance: 10.00%

Abstract:

Shaky Ground was a solo exhibition of works by Charles Robb held at Ryan Renshaw gallery, Brisbane, in 2012. The exhibition comprised three sculptural works: a white rotating roundel with a drawing of the artist as seen from above; an artificial rock with a spinning aniseed ball nestled in one of its fissures; and a sculptural portrait of the artist, dressed in a protective dust suit, mounted perpendicular to the wall. The works were derivations or reorientations of previously exhibited work and established an ambiguous field of associations with each other, based on formal characteristics or their proximity to the production site and its processes. In so doing, the works formed part of the artist's ongoing exploration of sculpture, subjectivity, and autogenous approaches to art practice.

Relevance: 10.00%

Abstract:

Reliable robotic perception and planning are critical to performing autonomous actions in uncertain, unstructured environments. In field robotic systems, automation is achieved by interpreting exteroceptive sensor information to infer something about the world. This is then mapped to provide a consistent spatial context, so that actions can be planned around the predicted future interaction of the robot and the world. The whole system is only as reliable as the weakest link in this chain. In this paper, the term mapping is used broadly to describe the transformation of range-based exteroceptive sensor data (such as LIDAR or stereo vision) to a fixed navigation frame, so that the data can be used to form an internal representation of the environment. The coordinate transformation from the sensor frame to the navigation frame is analyzed to produce a spatial error model that captures the dominant geometric and temporal sources of mapping error, allowing the mapping accuracy to be calculated at run time. A generic extrinsic calibration method for exteroceptive range-based sensors is then presented to determine the sensor location and orientation. This minimizes systematic errors in individual sensors and, when multiple sensors are used, minimizes the systematic contradiction between them, enabling reliable multisensor data fusion. The mathematical derivations at the core of this model are not particularly novel or complicated, but the rigorous analysis and its application to field robotics seem to be largely absent from the literature to date. The techniques in this paper are simple to implement, and they offer a significant improvement to the accuracy, precision, and integrity of mapped information. Consequently, they should be employed whenever maps are formed from range-based exteroceptive sensor data.
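
Mapping in the sense used here reduces to chaining coordinate transforms from the sensor frame through the body frame to the navigation frame. The sketch below uses 4x4 homogeneous transforms; the frame names and conventions are common field-robotics assumptions, not necessarily the paper's notation.

```python
import numpy as np

def map_points(points_sensor, T_body_sensor, T_nav_body):
    """Map range-sensor points into the navigation frame.

    points_sensor : (N, 3) array of points in the sensor frame
    T_body_sensor : 4x4 extrinsic calibration (sensor pose in the body frame)
    T_nav_body    : 4x4 vehicle pose at the scan timestamp
    """
    pts = np.asarray(points_sensor, dtype=float)
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])  # homogeneous coords
    T = T_nav_body @ T_body_sensor    # chain: sensor -> body -> navigation
    return (homog @ T.T)[:, :3]
```

Errors in T_body_sensor (calibration) and in the timestamp of T_nav_body (temporal misalignment) propagate directly into the mapped points, which is the kind of geometric and temporal error the paper's model quantifies.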

Relevance: 10.00%

Abstract:

The motion of marine vessels has traditionally been studied using two different approaches: manoeuvring and seakeeping. These two approaches use different reference frames and coordinate systems to describe the motion. This paper derives the kinematic models that characterize the transformation of motion variables (position, velocity, and acceleration) and forces between the different coordinate systems used in these theories. The derivations presented here are expressed in the formalism adopted in robotics, whose advantage is the use of matrix notation and operations. As an application, the transformation of the linear equations of motion used in seakeeping into body-fixed coordinates is considered for both zero and forward speed.
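
As a small illustration of the matrix formalism, the velocity of a point fixed in the vessel can be transported from the body origin using a skew-symmetric matrix in place of the cross product; the numbers below are arbitrary examples, not values from the paper.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix S(w) such that S(w) @ r == np.cross(w, r)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

v_o = np.array([5.0, 0.0, 0.0])     # velocity of the body origin (m/s)
omega = np.array([0.0, 0.0, 0.1])   # yaw rate (rad/s)
r = np.array([10.0, 0.0, 0.0])      # lever arm from origin to point p (m)

# v_p = v_o + omega x r, written entirely with matrix operations
v_p = v_o + skew(omega) @ r
print(v_p)                          # [5.0, 1.0, 0.0]
```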