172 results for covariance estimator


Relevance:

10.00%

Publisher:

Abstract:

Island races of passerine birds display repeated evolution towards larger body size compared with their continental ancestors. The Capricorn silvereye (Zosterops lateralis chlorocephalus) has become up to six phenotypic standard deviations bigger in several morphological measures since colonization of an island approximately 4000 years ago. We estimated the genetic variance-covariance (G) matrix using full-sib and 'animal model' analyses, and selection gradients, for six morphological traits under field conditions in three consecutive cohorts of nestlings. Significant levels of genetic variance were found for all traits. Significant directional selection was detected for wing and tail lengths in one year and quadratic selection on culmen depth in another year. Although selection gradients on many traits were negative, the predicted evolutionary response to selection of these traits for all cohorts was uniformly positive. These results indicate that the G matrix and predicted evolutionary responses are consistent with those of a population evolving in the manner observed in the island passerine trend, that is, towards larger body size.
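The predicted evolutionary response described above follows the multivariate breeder's equation, Δz̄ = Gβ. A minimal numpy sketch with invented numbers (the G matrix and selection gradients below are illustrative, not the paper's estimates) shows how negative gradients on some traits can still yield a uniformly positive predicted response through genetic covariances:

```python
import numpy as np

# Hypothetical G matrix for three traits (genetic variances/covariances).
# Values are illustrative only, not the paper's estimates.
G = np.array([[0.40, 0.15, 0.10],
              [0.15, 0.30, 0.05],
              [0.10, 0.05, 0.25]])

# Hypothetical selection gradients; note two are negative, as in the study.
beta = np.array([0.25, -0.05, -0.02])

# Multivariate breeder's equation: predicted per-generation response.
delta_z = G @ beta
# Every entry of delta_z is positive despite the negative gradients,
# because positive genetic covariances channel selection on trait 1
# into correlated responses in traits 2 and 3.
```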

The performance of techniques for evaluating multivariate volatility forecasts is not yet as well understood as that of their univariate counterparts. This paper evaluates the efficacy of a range of traditional statistical methods for multivariate forecast evaluation, together with methods based on considerations of economic theory. It is found that a statistical method based on likelihood theory and an economic loss function based on portfolio variance are the most effective means of identifying optimal forecasts of conditional covariance matrices.
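An economic loss function based on portfolio variance can be sketched as follows: each candidate covariance forecast implies a global minimum-variance portfolio, and the forecast whose portfolio attains the lowest realized variance is preferred. This is a hedged illustration on simulated returns, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" conditional covariance used only to simulate returns (illustrative).
true_cov = np.array([[1.0, 0.3],
                     [0.3, 2.0]])
returns = rng.multivariate_normal([0.0, 0.0], true_cov, size=5000)

def gmv_weights(cov):
    """Global minimum-variance portfolio implied by a covariance forecast."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

def realized_portfolio_variance(cov_forecast, returns):
    """Economic loss: realized variance of the forecast-implied portfolio."""
    return float(np.var(returns @ gmv_weights(cov_forecast)))

loss_good = realized_portfolio_variance(true_cov, returns)   # correct forecast
loss_bad = realized_portfolio_variance(np.eye(2), returns)   # naive forecast
# The better covariance forecast attains the lower realized portfolio variance.
```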

In the context of ambiguity resolution (AR) for Global Navigation Satellite Systems (GNSS), decorrelation among entries of an ambiguity vector, integer ambiguity search and ambiguity validation are three standard procedures for solving integer least-squares problems. This paper contributes to AR in three respects. Firstly, the orthogonality defect is introduced as a new measure of the performance of ambiguity decorrelation methods and compared with the decorrelation number and the condition number, which are currently used as criteria for the correlation of the ambiguity variance-covariance matrix. Numerically, the orthogonality defect captures the relation between decorrelation impact and computational efficiency slightly better than the condition number. Secondly, the paper examines how the decorrelation number, the condition number, the orthogonality defect and the size of the ambiguity search space relate to the number of ambiguity search candidates and search nodes. The size of the search space, which proves to be a significant parameter in the search process, can be properly estimated when the ambiguity matrix is well decorrelated. Thirdly, a new ambiguity resolution scheme is proposed to improve search efficiency by controlling the size of the search space. The new AR scheme combines the LAMBDA search and validation procedures, yielding a much smaller search space and higher computational efficiency while retaining the same validation outcomes. Moreover, the new scheme can handle the case where there is only one candidate, whereas existing search methods require at least two; when there is more than one candidate, it reverts to the usual ratio-test procedure.
Experimental results indicate that this combined method can indeed improve ambiguity search efficiency for both single and dual constellations, showing its potential for processing high-dimension integer parameters in multi-GNSS environments.
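The orthogonality defect used above as a decorrelation measure can be illustrated on a basis matrix: it is the product of the basis-vector norms divided by the absolute determinant, equal to 1 for an orthogonal basis and growing with correlation. A minimal sketch (applying it to an ambiguity problem would use, e.g., the Cholesky factor of the transformed variance-covariance matrix as the basis; that step is omitted here):

```python
import numpy as np

def orthogonality_defect(B):
    """Orthogonality defect of a basis matrix B (columns are basis vectors):
    product of column norms divided by |det B|.  Equals 1 exactly for an
    orthogonal basis and grows as the basis vectors become more correlated."""
    return float(np.prod(np.linalg.norm(B, axis=0)) / abs(np.linalg.det(B)))

B_orth = np.diag([2.0, 3.0])                     # orthogonal basis
B_corr = np.array([[1.0, 0.9],
                   [0.0, 0.1]])                  # strongly correlated basis

print(orthogonality_defect(B_orth))              # 1.0
print(orthogonality_defect(B_corr))              # ~9.1: far from orthogonal
```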

Background and aims: Lower-limb lymphoedema is a serious and feared sequela of treatment for gynaecological cancer. Given the limited prospective data on the incidence of and risk factors for lymphoedema after treatment for gynaecological cancer, we initiated a prospective cohort study in 2008. Methods: Data were available for 353 women with malignant disease. Participants were assessed before treatment (time 0) and at regular intervals after treatment for two years. Follow-up visits were grouped into time-periods of six weeks to six months (time 1), nine months to 15 months (time 2), and 18 months to 24 months (time 3). Preliminary data analyses were undertaken up to time 2 using generalised estimating equations to model the repeated-measures data of Functional Assessment of Cancer Therapy-General (FACT-G) quality of life (QoL) scores and self-reported swelling at each follow-up period (with the best-fitting covariance structure). Results: Depending on the time-period, between 30% and 40% of patients self-reported swelling of the lower limb. The QoL of those with self-reported swelling was lower at all time-periods compared with those who did not have swelling. Mean (95% CI) FACT-G scores at times 0, 1 and 2 were 80.7 (78.2, 83.2), 83.0 (81.0, 85.0) and 86.3 (84.2, 88.4), respectively, for those with swelling, and 85.0 (83.0, 86.9), 86.0 (84.1, 88.0) and 88.9 (87.0, 90.7), respectively, for those without swelling. Conclusions: Lower-limb swelling adversely influences QoL and change in QoL over time in patients with gynaecological cancer.

Reliable ambiguity resolution (AR) is essential to Real-Time Kinematic (RTK) positioning and its applications, since incorrect ambiguity fixing can lead to severely biased positioning solutions. A partial ambiguity fixing technique is developed to improve the reliability of AR, involving partial ambiguity decorrelation (PAD) and partial ambiguity resolution (PAR). Decorrelation transformations can substantially amplify biases in the phase measurements, so the purpose of PAD is to find the optimum trade-off between decorrelation and worst-case bias amplification. PAR refers to the case where only a subset of the ambiguities can be fixed correctly to their integers in the integer least-squares (ILS) estimation system at high success rates; RTK solutions can then be derived from these integer-fixed phase measurements. This is meaningful provided that the number of reliably resolved phase measurements is sufficiently large for least-squares estimation of the RTK solutions as well. Considering the GPS constellation alone, partially fixed measurements are often insufficient for positioning. AR reliability is usually characterised by the AR success rate. In this contribution an AR validation decision matrix is first introduced to understand the impact of the success rate, and the AR risk probability is included for a more complete evaluation of AR reliability. We use 16 ambiguity variance-covariance matrices with different levels of success rate to analyse the relation between success rate and AR risk probability. Next, the paper examines how, during the PAD process, a bias in one measurement is propagated and amplified onto many others, leading to more than one wrongly fixed integer and reducing the success probability. Furthermore, the paper proposes a partial ambiguity fixing procedure with a predefined success-rate criterion and a ratio-test in the ambiguity validation process.
The Galileo constellation is tested with simulated observations. Numerical results from our experiment clearly demonstrate that only when the computed success rate is very high can AR validation provide decisions about the correctness of AR that are close to reality, with both low AR risk and low false-alarm probabilities. The results also indicate that the PAR procedure can automatically choose an adequate number of ambiguities to fix at a given high success rate from the multiple constellations, instead of fixing all the ambiguities - a benefit that multiple GNSS constellations can offer.
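The success rates referred to above are commonly approximated (from below) by the integer-bootstrapping success rate, Ps = ∏ᵢ (2Φ(1/(2σᵢ)) − 1), built from the sequential conditional standard deviations of the ambiguity variance-covariance matrix. A minimal sketch; the matrices are invented, and in practice the decorrelated, properly ordered vc-matrix from a LAMBDA-type method would be used:

```python
import math
import numpy as np

def bootstrapped_success_rate(Q):
    """Integer-bootstrapping success rate, a common lower bound for ILS:
    Ps = prod_i (2*Phi(1/(2*s_i)) - 1), with s_i the sequential conditional
    standard deviations taken here from the Cholesky factor of Q."""
    s = np.diag(np.linalg.cholesky(Q))     # conditional std devs (Q = L L^T)
    phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
    return float(np.prod([2.0 * phi(1.0 / (2.0 * si)) - 1.0 for si in s]))

Q_good = np.diag([0.01, 0.02])   # well decorrelated, tiny conditional variances
Q_poor = np.diag([1.0, 2.0])     # large conditional variances

print(bootstrapped_success_rate(Q_good))   # close to 1: fixing is safe
print(bootstrapped_success_rate(Q_poor))   # low: fixing all ambiguities is risky
```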

In most visual mapping applications suited to Autonomous Underwater Vehicles (AUVs), stereo visual odometry (VO) is rarely utilised as a pose estimator because imagery is typically captured at a very low frame rate due to energy conservation and data storage requirements. This adversely affects the robustness of a vision-based pose estimator and its ability to generate a smooth trajectory. This paper presents a novel VO pipeline for low-overlap imagery from an AUV that utilises constrained motion and integrates magnetometer data in a bi-objective bundle adjustment stage to achieve low-drift pose estimates over large trajectories. We analyse the performance of a standard stereo VO algorithm and compare the results to those of the modified VO algorithm. Results are demonstrated in a virtual environment and on low-overlap imagery gathered from an AUV. The modified VO algorithm shows significantly improved pose accuracy and performance over trajectories of more than 300 m. In addition, dense 3D meshes generated from the VO pipeline are presented as a qualitative output of the solution.

This paper proposes a new approach for state estimation of the angles and frequencies of equivalent areas in large power systems equipped with synchronized phasor measurement units (PMUs). After defining coherent generators and their corresponding areas, generators are aggregated and system reduction is performed in each area of the interconnected power system. The structure of the reduced system is obtained from the characteristics of the reduced linear model and the measurement data, which together form the non-linear model of the reduced system. A Kalman estimator is then designed for the reduced system to provide equivalent dynamic state estimation from the synchronized phasor measurement data. The method is simulated on two test systems to evaluate its feasibility.
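The paper's reduced-order Kalman estimator is not specified in the abstract, but the generic predict/update recursion it builds on can be sketched as follows. The two-state model (equivalent angle and frequency), the matrices and the PMU interval below are illustrative assumptions, not the paper's reduced model:

```python
import numpy as np

# Illustrative reduced-order model of one equivalent area: state = [angle, freq].
dt = 0.02                                   # assumed PMU reporting interval (s)
F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition
H = np.array([[1.0, 0.0]])                  # PMU measures the equivalent angle
Qn = 1e-5 * np.eye(2)                       # process noise covariance
R = np.array([[1e-3]])                      # measurement noise covariance

def kalman_step(x, P, z):
    """One predict/update cycle on a synchronized phasor measurement z."""
    x_pred = F @ x                          # predict state
    P_pred = F @ P @ F.T + Qn               # predict covariance
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + (K @ (z - H @ x_pred)).ravel()
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(2), np.eye(2)
for z in np.array([[0.100], [0.102], [0.104]]):
    x, P = kalman_step(x, P, z)
# x[0] tracks the measured angle; P shrinks as measurements arrive.
```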

The ability to forecast machinery health is vital to reducing maintenance costs, operational downtime and safety hazards. Recent advances in condition monitoring technologies have given rise to a number of prognostic models which attempt to forecast machinery health based on condition data such as vibration measurements. This paper demonstrates how the population characteristics and condition monitoring data (both complete and suspended) of historical items can be integrated to train an intelligent agent to predict asset health multiple steps ahead. The model consists of a feed-forward neural network whose training targets are asset survival probabilities estimated using a variation of the Kaplan–Meier estimator and a degradation-based failure probability density function estimator. The trained network estimates future survival probabilities when a series of asset condition readings is input; the output survival probabilities collectively form an estimated survival curve. Pump data from a pulp and paper mill were used for model validation and comparison. The results indicate that the proposed model predicts more accurately, and further ahead, than similar models which neglect population characteristics and suspended data. This work presents a compelling concept for longer-range fault prognosis that uses the available information more fully and accurately.
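The Kaplan–Meier building block can be sketched directly: suspended histories (right-censored items) shrink the risk set without reducing the survival estimate, which is why they still carry information. A minimal sketch with simplified tie handling, not the paper's exact variant:

```python
import numpy as np

def kaplan_meier(times, failed):
    """Kaplan-Meier survival curve from event times and status flags.
    failed[i] is True for a failure, False for a suspension (right-censoring)."""
    times = np.asarray(times, dtype=float)
    failed = np.asarray(failed, dtype=bool)
    order = np.argsort(times, kind="stable")
    n_at_risk, survival, curve = len(times), 1.0, []
    for t, f in zip(times[order], failed[order]):
        if f:                         # only failures lower the survival estimate
            survival *= (n_at_risk - 1) / n_at_risk
        n_at_risk -= 1                # suspensions still leave the risk set
        curve.append((t, survival))
    return curve

# Failures at 2, 3 and 5; suspensions (still running when monitoring stopped)
# at 3 and 7.  Final survival estimate: 0.8 * (2/3) * 0.5.
curve = kaplan_meier([2, 3, 3, 5, 7], [True, False, True, True, False])
```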

This paper introduces a high-speed (100 Hz) vision-based state estimator suitable for quadrotor control in close-quarters manoeuvring applications. We describe the hardware and algorithms for estimating the state of the quadrotor. Experimental results for the position, velocity and yaw-angle estimators are presented and compared with motion-capture data. A quantitative performance comparison with state-of-the-art systems is also presented.

To recognize faces in video, face appearances have been widely modeled as piece-wise local linear models which linearly approximate the smooth yet non-linear low-dimensional face appearance manifolds. The choice of representation for the local models is crucial. Most existing methods learn each local model individually, meaning that they only anticipate variations within each class. In this work, we propose to represent local models as Gaussian distributions which are learned simultaneously using heteroscedastic probabilistic linear discriminant analysis (PLDA). Each gallery video is therefore represented as a collection of such distributions. With the PLDA, not only are the within-class variations estimated during training, but the separability between classes is also maximized, leading to improved discrimination. The heteroscedastic PLDA itself is adapted from the standard PLDA to approximate face appearance manifolds more accurately: instead of assuming a single global within-class covariance, it learns a different within-class covariance specific to each local model. In the recognition phase, a probe video is matched against gallery samples through the fusion of point-to-model distances. Experiments on the Honda and MoBo datasets demonstrate the merit of the proposed method, which achieves better performance than the state-of-the-art technique.
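The point-to-model distances fused in the recognition phase are not spelled out in the abstract; under a Gaussian local model, a natural choice is the squared Mahalanobis distance, sketched here with invented two-dimensional models whose within-class covariances differ, mirroring the heteroscedastic assumption:

```python
import numpy as np

def point_to_model_distance(x, mean, cov):
    """Squared Mahalanobis distance of a probe frame x to a Gaussian local
    model (mean, cov): a simple stand-in for a point-to-model distance."""
    diff = x - mean
    return float(diff @ np.linalg.solve(cov, diff))

# Two invented local models with different within-class covariances.
mean_a, cov_a = np.zeros(2), np.diag([1.0, 1.0])
mean_b, cov_b = np.array([3.0, 0.0]), np.diag([0.25, 4.0])

probe = np.array([2.5, 0.5])
d_a = point_to_model_distance(probe, mean_a, cov_a)   # 6.5
d_b = point_to_model_distance(probe, mean_b, cov_b)   # 1.0625
# The probe is nearer model b once its covariance is taken into account.
```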

Suggests an alternative and computationally simpler approach to the non-random sampling problem in labour economics, where the observed outcome reflects an individual female's choice of whether or not to participate in the labour market. Concludes that there is an alternative to the Heckman two-step estimator.
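The Heckman two-step estimator referred to here can be sketched in a few lines: a first-stage probit for participation yields an inverse Mills ratio, which enters the second-stage OLS as an extra regressor. The sketch below assumes the first-stage coefficients rather than estimating them, and all data are simulated:

```python
import math
import numpy as np

def inverse_mills(z):
    """lambda(z) = phi(z) / Phi(z) for a probit selection index z."""
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return pdf / cdf

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)

# Assumed (not estimated) first-stage probit coefficients, for illustration.
probit_index = 0.5 + 0.8 * x
lam = np.array([inverse_mills(z) for z in probit_index])

# Simulated second-stage data: true coefficients 1.0 and 2.0, plus a
# selection term of 0.5 on the Mills ratio.
y = 1.0 + 2.0 * x + 0.5 * lam + rng.normal(scale=0.1, size=n)

# Second step: OLS of observed outcomes on regressors plus the Mills ratio.
X = np.column_stack([np.ones(n), x, lam])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# A sizeable Mills-ratio coefficient (beta[2]) signals selection bias, i.e.
# that ignoring the self-selected participation decision is not innocuous.
```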

Immigration has played an important role in the historical development of Australia. Thus, it is no surprise that a large body of empirical work has developed which focuses upon how migrants fare in the land of opportunity. Much of the literature is comparatively recent, i.e. from the last ten years or so, encouraged by the public availability of Australian cross-section micro data. Several different aspects of migrant welfare have been addressed, with major emphasis placed upon earnings and unemployment experience. For recent examples see Haig (1980), Stromback (1984), Chiswick and Miller (1985), Tran-Nam and Nevile (1988) and Beggs and Chapman (1988).

The present paper contributes to the literature by providing additional empirical evidence on the native/migrant earnings differential. The data utilised are from the rather neglected Australian Bureau of Statistics (ABS) Special Supplementary Survey No. 4, 1982, otherwise known as the Family Survey. The paper also examines the importance of distinguishing between the wage and salary sector and the self-employment sector when discussing native/migrant differentials. Separate earnings equations for the two labour market groups are estimated and the native/migrant earnings differential is broken down by employment status. This is a novel application in the Australian context and provides some insight into the earnings of the self-employed, a group that, despite its size (around 20 per cent of the labour force), is frequently ignored by economic research.

Most previous empirical research fails to examine the effect of employment status on earnings. Stromback (1984) includes a dummy variable representing self-employment status in an earnings equation estimated over a pooled sample of paid and self-employed workers. The variable is found to be highly significant, which leads Stromback to question the efficacy of including the self-employed in the estimation sample. The suggestion is that part of self-employed earnings represents a return to non-human capital investment, i.e. investments in machinery, buildings etc., so the structural determinants of earnings differ significantly from those for paid employees. Tran-Nam and Nevile (1988) deal with differences between paid employees and the self-employed by deleting the latter from their sample. However, deleting the self-employed from the estimation sample may bias the OLS estimates (see Heckman 1979), since the desirable properties of OLS depend upon estimation over a random sample. Thus, the Tran-Nam and Nevile results are likely to suffer from bias unless individuals are randomly allocated between self-employment and paid employment.

The current analysis extends Tran-Nam and Nevile (1988) by explicitly treating the choice of paid employment versus self-employment as endogenously determined. This allows an explicit test of the appropriateness of deleting self-employed workers from the sample. Earnings equations corrected for sample selection are estimated for both natives and migrants in the paid-employee sector, using the Heckman (1979) two-step estimator.

The paper is divided into five major sections. The next section presents the econometric model, incorporating the specification of the earnings-generating process together with an explicit model determining an individual's employment status. In Section III the data are described. Section IV draws together the main econometric results of the paper: first, the probit estimates of the labour market status equation are documented; this is followed by presentation and discussion of the Heckman two-step estimates of the earnings specification for both native and migrant Australians, with separate earnings equations estimated for paid employees and the self-employed. Section V documents estimates of the native/migrant earnings differential for both categories of employees. To aid comparison with earlier work, the Oaxaca decomposition of the earnings differential for paid employees is carried out for both the simple OLS regression results and the parameter estimates corrected for sample selection effects. These differentials are interpreted and compared with previous Australian findings. A short section concludes the paper.
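The Oaxaca decomposition mentioned above splits a mean earnings gap into an endowments (explained) part and a coefficients (unexplained) part, and the two parts reconstruct the gap exactly. A sketch with invented means and coefficients (not estimates from the Family Survey):

```python
import numpy as np

def oaxaca(mean_x1, beta1, mean_x2, beta2):
    """Oaxaca decomposition of a mean (log-)earnings gap, evaluated at
    group 1's coefficients: gap = endowments part + coefficients part."""
    explained = (mean_x1 - mean_x2) @ beta1      # differences in characteristics
    unexplained = mean_x2 @ (beta1 - beta2)      # differences in returns
    return float(explained), float(unexplained)

# Invented means (constant, schooling, experience) and OLS coefficients.
mean_native = np.array([1.0, 12.0, 10.0])
mean_migrant = np.array([1.0, 11.0, 8.0])
beta_native = np.array([0.50, 0.08, 0.02])
beta_migrant = np.array([0.45, 0.07, 0.02])

explained, unexplained = oaxaca(mean_native, beta_native,
                                mean_migrant, beta_migrant)
total_gap = float(mean_native @ beta_native - mean_migrant @ beta_migrant)
# explained + unexplained reconstructs total_gap (0.12 + 0.16 = 0.28).
```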

This paper presents a unified view of the relationship between (1) quantity-generating and (2) price-generating mechanisms in estimating individual prime construction costs/prices. A brief review of quantity-generating techniques is provided, with particular emphasis on experientially based assumptive approaches, and this is compared with the level of pricing data available for the quantities generated in terms of the reliability of the ensuing prime cost estimates. It is argued that there is a trade-off between the reliability of quantity items and the reliability of rates; thus the level of quantity generation is optimised by maximising the joint reliability function of the quantity items and their associated rates. Some thoughts on how this joint reliability function can be evaluated and quantified follow. The application of these ideas is described within the overall strategy of the estimator's decision - "Which estimating technique shall I use for a given level of contract information?" - and a case is made for the computer generation of estimates by several methods, with an indication of the reliability of each estimate, the ultimate choice of estimate being left to the estimator concerned. Finally, the potential for the development of automatic estimating systems within this framework is examined.

The construction industry contains two types of estimator: the contractor's estimator and the designer's price forecaster. Each has two models of the building in which to systemise his procedures - the production model and the design model. The use of these models is discussed in the light of the industry's particular problems of complexity and uncertainty, together with the pressures of the market. It is argued that estimators and forecasters, in order to function effectively in these conditions, are forced to exercise a high degree of subjective judgment. Means of eliciting the good heuristics involved in judgment making are considered by reference to the artificial intelligence and construction literature, and a methodology is proposed based on these findings. The results of two early trials of the method with students are given, indicating the usefulness of the approach.

Several methods of estimating the costs or price of construction projects are now available for use in the construction industry. Owing to the conservative approach of estimators and quantity surveyors, and the fact that the industry is undergoing one of its deepest recessions this century, it is difficult to implement any changes to these processes. Several methods have been tried and tested, and probably discarded forever, whereas other methods are still in their infancy. There is also a movement towards greater use of the computer, whichever method is adopted. An important consideration with any method of estimating is the accuracy with which costs can be calculated. Any improvement in this respect will be welcomed by all parties, because existing methods are poor when measured by this criterion. Estimating, particularly by contractors, has always carried some mystique, and many of the processes discussed both in the classroom and in practice are little more than fallacy when properly investigated. What makes an estimator or quantity surveyor good at forecasting the right price? To what extent does human behaviour influence or have a part to play? These and other aspects of effective estimating are now examined in more detail.