28 results for mean-variance estimation

Relevance: 30.00%

Abstract:

This paper proposes a sampling procedure called selected ranked set sampling (SRSS), in which only selected observations from a ranked set sample (RSS) are measured. The paper describes the optimal linear estimation of location and scale parameters based on SRSS and, for some distributions, presents the tables required for optimal selection. For these distributions, the optimal SRSS estimators are compared with the popular simple random sample (SRS) and RSS estimators. In every situation considered, the SRSS-based estimators are advantageous in at least some respect compared with those obtained from SRS or RSS. The SRSS method with errors in ranking is also described. The relative precision of the estimator of the population mean is investigated for different degrees of correlation between the actual and erroneous rankings. The paper reports the minimum value of the correlation coefficient between the actual and erroneous rankings required to achieve better precision than the usual SRS estimator and the RSS estimator.

Relevance: 30.00%

Abstract:

This thesis reports on a quantitative exposure assessment and on an analysis of the attributes of the data used in the estimations, in particular distinguishing between uncertainty and variability. A retrospective assessment of exposure to benzene was carried out for a case-control study of leukaemia in the Australian petroleum industry. The study used the mean of personal task-based measurements (Base Estimates) in a deterministic algorithm and applied factors to model back to places, times, etc. for which no exposure measurements were available. Mean daily exposures were estimated, on an individual-subject basis, by summing the task-based exposures. These mean exposures were multiplied by the years spent in each job to provide exposure estimates in ppm-years, which were summed to provide a Cumulative Estimate for each subject. Validation was completed for the model and key inputs. Exposures were low; most jobs were below a TWA of 5 ppm benzene. Exposures at terminals were generally higher than at refineries. Cumulative Estimates ranged from 0.005 to 50.9 ppm-years, with 84 percent less than 10 ppm-years. Exposure probability distributions were developed for tanker drivers using Monte Carlo simulation of the exposure estimation algorithm. The outcome was a lognormal distribution of exposure for each driver. These distributions provide the basis for alternative risk assessment metrics, e.g. the frequency of short but intense exposures, which contribute only minimally to the long-term average exposure but may increase the risk of leukaemia. The effects of different inputs to the model were examined and their significance assessed using Monte Carlo simulation. The Base Estimates were the most important determinant of exposure in the model. The sources of variability in the measured data were examined, including the effect of censored data and the between- and within-worker variability.
The sources of uncertainty in the exposure estimates were analysed and consequent improvements in exposure assessment identified. Monte Carlo sampling was also used to examine the uncertainties and variability associated with the tanker drivers' exposure assessment, to derive an estimate of the range, and to put confidence intervals on the daily mean exposures. The identified uncertainty was less than the variability associated with the estimates. The traditional approach to exposure estimation typically derives only point estimates of mean exposure. The approach developed here allows a range of exposure estimates to be made and provides a more flexible and improved basis for risk assessment.
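The Monte Carlo step described above can be sketched as follows, assuming lognormally distributed task exposures combined through a time-weighted sum. The task names, Base Estimates, durations, and geometric standard deviation are hypothetical illustrations, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task-based Base Estimates (ppm) and daily task durations (h);
# the lognormal spread parameter is illustrative, not a fitted value.
base_estimates = {"loading": 1.2, "driving": 0.3, "unloading": 0.8}
hours_per_day = {"loading": 2.0, "driving": 5.0, "unloading": 1.0}
gsd = 2.5  # assumed geometric standard deviation for each task

n_sims = 10_000
daily = np.zeros(n_sims)
for task, gm in base_estimates.items():
    # sample lognormal task concentrations around the geometric mean
    conc = rng.lognormal(mean=np.log(gm), sigma=np.log(gsd), size=n_sims)
    daily += conc * hours_per_day[task]
daily /= 8.0  # time-weighted average over an 8-h working day

print(f"median daily TWA: {np.median(daily):.2f} ppm")
print(f"95th percentile:  {np.percentile(daily, 95):.2f} ppm")
```

The output is a full exposure distribution per worker rather than a single point estimate, which is what enables the alternative risk metrics described above.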

Relevance: 30.00%

Abstract:

The residential market in Melbourne is often referred to as the 'auction capital of the world', with approximately 30-35% of housing transfers undertaken via the auction process, most of which are conducted on the weekend and then reported in the media the following day. The most quoted measure of auction success is the clearance rate, which indicates the proportion of auctioned properties that result in signed contracts of sale. At the same time, the clearance rate can have a relatively large variance, as the residential market can range from very strong (a high clearance rate) to very weak (a low clearance rate). The subsequent effect on the market can directly increase or decrease demand, predominantly on the basis of this single measure alone.

This paper examines the concept of auction clearance rates and the heavy reliance on this single measure of success, regardless of other variables. The emphasis is placed on the auction clearance rate as one measure of demand in the housing market, but within the context of the definition of market value, i.e. a willing buyer and a willing seller. This is supported by a discussion of other variables, including the asking price, the auction process itself, marketing considerations, and seasonal adjustments. The findings provide an insight into how to correctly interpret the auction clearance rate in the context of the overall supply-demand interaction. Whilst the auction process is clearly an integral part of the residential transfer process, it is essential that the auction clearance rate be used with caution and in conjunction with other variables.
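As a concrete illustration of the measure under discussion, a minimal clearance-rate calculation might look like the following. Reporting bodies differ on which categories (sold prior, sold after, withdrawn) count toward the numerator and denominator, so the breakdown here is an assumption.

```python
def clearance_rate(sold_under_hammer, sold_before, sold_after, passed_in, withdrawn):
    """Proportion of auctioned properties that resulted in a signed contract.

    The category breakdown is illustrative; reporting conventions vary.
    """
    cleared = sold_under_hammer + sold_before + sold_after
    total = cleared + passed_in + withdrawn
    return cleared / total if total else float("nan")

# 70 of 100 auctioned properties cleared
print(f"{clearance_rate(55, 10, 5, 25, 5):.1%}")  # prints "70.0%"
```

Note that the same headline figure can arise from very different mixes of sales under the hammer versus prior sales, which is one reason the rate should be read alongside other variables.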

Relevance: 30.00%

Abstract:

Long Term Evolution (LTE) is designed for high data rates, high spectral efficiency, and low latency, as well as high-capacity voice support. LTE uses single-carrier frequency division multiple access (SC-FDMA) for uplink transmission and orthogonal frequency division multiple access (OFDMA) for the downlink. Among the most important challenges for a terminal implementation are channel estimation (CE) and equalization. In this paper, a minimum mean square error (MMSE) based channel estimator is proposed for OFDMA systems that avoids the ill-conditioned least squares (LS) problem with lower computational complexity. This channel estimation technique uses knowledge of channel properties to estimate the unknown channel transfer function at non-pilot subcarriers.
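A minimal sketch of the kind of pilot-based MMSE estimator the abstract describes, assuming a comb-type pilot pattern, unit-power pilots, and a uniform power-delay profile; the regularizing noise term in the matrix inverse is what sidesteps the ill-conditioned LS inversion. All parameter values are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 64                        # subcarriers
pilots = np.arange(0, N, 8)   # comb-type pilot positions (assumed layout)
L = 4                         # assumed number of channel taps
noise_var = 10 ** (-20 / 10)  # 20 dB SNR

# Random multipath channel and its frequency response
h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)
H = np.fft.fft(h, N)

# Noisy pilot observations (pilot symbols = 1, so the LS estimate is just Y)
noise = np.sqrt(noise_var / 2) * (rng.standard_normal(pilots.size)
                                  + 1j * rng.standard_normal(pilots.size))
H_ls = H[pilots] + noise

# Channel frequency correlation for a uniform power-delay profile over L taps:
# R[k, m] = E[H_k H_m^*] = (1/L) * sum_l exp(-2j*pi*(k-m)*l/N)
diff = np.arange(N)[:, None] - np.arange(N)[None, :]
R = np.exp(-2j * np.pi * diff[:, :, None] * np.arange(L) / N).mean(axis=-1)

# MMSE interpolation from pilot to all subcarriers; the noise_var * I term
# regularizes the otherwise ill-conditioned inversion
R_hp = R[:, pilots]
R_pp = R[np.ix_(pilots, pilots)]
H_mmse = R_hp @ np.linalg.solve(R_pp + noise_var * np.eye(pilots.size), H_ls)

print("MMSE estimation MSE:", np.mean(np.abs(H - H_mmse) ** 2))
```

The correlation matrix R encodes the "knowledge of channel properties" the abstract mentions; with it, the estimator fills in the transfer function at non-pilot subcarriers.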

Relevance: 30.00%

Abstract:

Missing data imputation is a key issue in learning from incomplete data. Various techniques have been developed with great success for dealing with missing values in data sets with homogeneous attributes (their independent attributes are all either continuous or discrete). This paper studies a new setting of missing data imputation, i.e., imputing missing data in data sets with heterogeneous attributes (their independent attributes are of different types), referred to as imputing mixed-attribute data sets. Although many real applications fall into this setting, no estimator has been designed for imputing mixed-attribute data sets. This paper first proposes two consistent estimators, for discrete and continuous missing target values respectively. A mixture-kernel-based iterative estimator is then advocated to impute mixed-attribute data sets. The proposed method is evaluated in extensive experiments against several typical algorithms, and the results demonstrate that the proposed approach outperforms the existing imputation methods in terms of classification accuracy and root mean square error (RMSE) at different missing ratios.
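The mixture-kernel idea can be sketched with a Nadaraya-Watson style estimator that combines a Gaussian kernel on continuous attributes with a simple matching kernel on discrete ones. The bandwidths and toy data are illustrative, and the paper's iterative scheme is not reproduced here.

```python
import numpy as np

def mixed_kernel_impute(X_cont, X_disc, y, x_cont, x_disc, h=1.0, lam=0.3):
    """Nadaraya-Watson estimate of a missing target from mixed-type predictors.

    A sketch of the mixture-kernel idea: a Gaussian kernel on continuous
    attributes combined with a matching kernel on discrete attributes.
    Bandwidths h and lam are illustrative, not tuned.
    """
    # Gaussian product kernel on continuous attributes
    w_cont = np.exp(-0.5 * np.sum(((X_cont - x_cont) / h) ** 2, axis=1))
    # Discrete kernel: weight 1 for matching categories, lam for mismatches
    w_disc = np.prod(np.where(X_disc == x_disc, 1.0, lam), axis=1)
    w = w_cont * w_disc
    return np.sum(w * y) / np.sum(w)

# Toy complete cases: two continuous attributes and one discrete attribute
X_cont = np.array([[0.0, 1.0], [0.1, 0.9], [5.0, 5.0]])
X_disc = np.array([[0], [0], [1]])
y = np.array([1.0, 1.2, 10.0])

# Impute the target for a record resembling the first two rows
print(mixed_kernel_impute(X_cont, X_disc, y, np.array([0.05, 0.95]), np.array([0])))
```

The imputed value is pulled almost entirely toward the two nearby, category-matching cases, since the distant mismatching case receives a negligible weight.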

Relevance: 30.00%

Abstract:

Recent advances in telemetry technology have created a wealth of tracking data available for many animal species moving over spatial scales from tens of meters to tens of thousands of kilometers. Increasingly, such data sets are being used for quantitative movement analyses aimed at extracting fundamental biological signals such as optimal searching behavior and scale-dependent foraging decisions. We show here that the location error inherent in various tracking technologies reduces the ability to detect patterns of behavior within movements. Our analyses endeavored to set out a series of initial ground rules for ecologists to help ensure that sampling noise is not misinterpreted as a real biological signal. We simulated animal movement tracks using specialized random walks known as Lévy flights at three spatial scales of investigation: 100-km, 10-km, and 1-km maximum daily step lengths. The locations generated in the simulations were then blurred using known error distributions associated with commonly applied tracking methods: the Global Positioning System (GPS), Argos polar-orbiting satellites, and light-level geolocation. Deviations from the idealized Lévy flight pattern were assessed for each track after increasing levels of location error were applied at each spatial scale, with additional assessments of the effect of error on scale-dependent movement patterns measured using fractal mean dimension and first-passage time (FPT) analyses. The accuracy of parameter estimation (Lévy μ, fractal mean D, and variance in FPT) declined precipitously at threshold errors relative to each spatial scale. At 100-km maximum daily step lengths, error standard deviations of ≥10 km seriously eroded the biological patterns evident in the simulated tracks, with analogous thresholds at the 10-km and 1-km scales (error SD ≥ 1.3 km and 0.07 km, respectively).
Temporal subsampling of the simulated tracks maintained some elements of the biological signals depending on error level and spatial scale. Failure to account for large errors relative to the scale of movement can produce substantial biases in the interpretation of movement patterns. This study provides researchers with a framework for understanding the limitations of their data and identifies how temporal subsampling can help to reduce the influence of spatial error on their conclusions.
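The simulation-and-blur procedure can be sketched as follows: generate a Lévy flight by inverse-transform sampling of a power-law step-length distribution, then add Gaussian location error of a chosen standard deviation. The exponent, scale, and error SD below are illustrative; the study's actual error distributions (GPS, Argos, geolocation) are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def levy_flight(n_steps, mu=2.0, step_min=1.0):
    """Simulate a 2-D Lévy flight: step lengths follow a power law P(l) ~ l^-mu."""
    # Inverse-transform sampling of the Pareto-type step-length distribution
    u = rng.random(n_steps)
    steps = step_min * u ** (1.0 / (1.0 - mu))
    angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)
    xy = np.cumsum(np.column_stack((steps * np.cos(angles),
                                    steps * np.sin(angles))), axis=0)
    return xy

track = levy_flight(1000, mu=2.0, step_min=0.1)  # units in km (assumed scale)

# Blur the track with Gaussian location error of a given SD (e.g. Argos-like)
error_sd = 1.0
blurred = track + rng.normal(0.0, error_sd, track.shape)

true_steps = np.linalg.norm(np.diff(track, axis=0), axis=1)
observed_steps = np.linalg.norm(np.diff(blurred, axis=0), axis=1)
print("true mean step:", true_steps.mean(), "observed:", observed_steps.mean())
```

With an error SD much larger than the minimum step length, the observed step-length distribution no longer resembles the generating power law, which is the effect the study quantifies.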

Relevance: 30.00%

Abstract:

This article investigates the impact of oil price volatility on six major emerging economies in Asia using time-series cross-section and time-series econometric techniques. To assess the robustness of the findings, we further implement such heterogeneous panel data estimation methods as Mean Group (MG), Common Correlated Effects Mean Group (CCEMG) and Augmented Mean Group (AMG) estimators to allow for cross-sectional dependence. The empirical results reveal that oil price volatility has a detrimental effect on these emerging economies. In the short run, oil price volatility influenced output growth in China and affected both GDP growth and inflation in India. In the Philippines, oil price volatility impacted on inflation, but in Indonesia, it impacted on both GDP growth and inflation before and after the Asian financial crisis. In Malaysia, oil price volatility impacted on GDP growth, although there is notably little feedback from the opposite side. For Thailand, oil price volatility influenced output growth prior to the Asian financial crisis, but the impact disappeared after the crisis. It appears that oil subsidization by the Thai Government via introduction of the oil fund played a significant role in improving the economic performance by lessening the adverse effects of oil price volatility on macroeconomic indicators.

Relevance: 30.00%

Abstract:

In this paper we propose a simple procedure for the data-dependent determination of the number of lags and leads to use in feasible estimation of cointegrated panel regressions. Results from Monte Carlo simulations suggest that the feasible estimators considered enjoy excellent precision in terms of root mean squared error and reasonable power, with effective size hovering close to the nominal level. The good performance of the feasible estimators is verified empirically through an application to long-run money demand.

Relevance: 30.00%

Abstract:

This paper addresses the problem of fully automatic localization and segmentation of 3D intervertebral discs (IVDs) from MR images. Our method consists of two steps: we first localize the center of each IVD, and then segment the IVDs by classifying image pixels around each disc center as foreground (disc) or background. The disc localization is done by estimating the image displacements from a set of randomly sampled 3D image patches to the disc center. The image displacements are estimated by jointly optimizing the training and test displacement values in a data-driven way, taking into consideration both the training data and the geometric constraint on the test image. After the disc centers are localized, we segment the discs by classifying image pixels around the disc centers as background or foreground. The classification follows a data-driven approach similar to that used for localization, but for segmentation we estimate the foreground/background probability of each pixel instead of the image displacements. In addition, an extra neighborhood smoothness constraint is introduced to enforce the local smoothness of the label field. Our method is validated on 3D T2-weighted turbo spin echo MR images of 35 patients from two different studies. Experiments show that, compared to the state of the art, our method achieves better or comparable results. Specifically, we achieve a mean localization error of 1.6-2.0 mm, a mean Dice metric of 85%-88%, and a mean surface distance of 1.3-1.4 mm for segmentation.

Relevance: 30.00%

Abstract:

© 2015, Springer-Verlag Berlin Heidelberg. Anti-predator behavior is a key aspect of life history evolution, usually studied at the population (mean) or across-individual level. However, individuals can also differ in their intra-individual (residual) variation; to our knowledge, this has been studied only once before in free-living animals. Here we studied the distances moved and the changes in nest height and concealment between successive nesting attempts of marked pairs of grey fantails (Rhipidura albiscapa) in relation to nest fate, across the breeding season. We predicted that females (the sex that decides where the nest is placed) should on average show adaptive behavioral responses to the experience of prior predation risk, such that after an unsuccessful nesting attempt, replacement nests should be further away, higher from the ground, and more concealed than replacement nests after successful nesting attempts. We found that, on average, females moved greater distances to re-nest after unsuccessful nesting attempts (abandoned or depredated) than after a successful attempt, suggesting that re-nesting decisions are sensitive to risk. We found no consistent across-individual differences in distances moved, heights, or concealment. However, females differed by 53-fold (or more) in their intra-individual variability (i.e., predictability) with respect to distances moved and changes in nest height between nesting attempts, indicating either that some systematic variation went unexplained and/or that females have inherently different predictability. Ignoring these individual differences in residual variance in our models obscured the effect of nest fate on re-nesting decisions that was evident at the mean level.

Relevance: 30.00%

Abstract:

In this paper, we investigate the channel estimation problem for multiple-input multiple-output (MIMO) relay communication systems with time-varying channels. The time-varying characteristic of the channels is described by the complex-exponential basis expansion model (CE-BEM). We propose a superimposed channel training algorithm to estimate the individual first-hop and second-hop time-varying channel matrices for MIMO relay systems. In particular, the estimation of the second-hop time-varying channel matrix is performed by exploiting the superimposed training sequence at the relay node, while the first-hop time-varying channel matrix is estimated through the source node training sequence and the estimated second-hop channel. To improve the performance of channel estimation, we derive the optimal structure of the source and relay training sequences that minimize the mean-squared error (MSE) of channel estimation. We also optimize the relay amplification factor that governs the power allocation between the source and relay training sequences. Numerical simulations demonstrate that the proposed superimposed channel training algorithm for MIMO relay systems with time-varying channels outperforms the conventional two-stage channel estimation scheme.
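The paper's two-hop superimposed scheme is not reproduced here, but the underlying principle, that training sequences with orthogonal rows minimize the channel-estimation MSE, can be illustrated for a single MIMO link. The DFT-based sequences, dimensions, and noise level below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

Nr, Nt, T = 2, 2, 8  # receive antennas, transmit antennas, training length
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

# DFT-based training with orthogonal rows: S S^H = (T / Nt) I
F = np.fft.fft(np.eye(T)) / np.sqrt(T)  # unitary DFT matrix
S = np.sqrt(T / Nt) * F[:Nt, :]

# Received training block with additive noise
noise = 0.1 * (rng.standard_normal((Nr, T)) + 1j * rng.standard_normal((Nr, T)))
Y = H @ S + noise

# LS channel estimate; with orthogonal rows the inverted Gram matrix is just a
# scaled identity, which is why such sequences minimize the estimation MSE
H_ls = Y @ S.conj().T @ np.linalg.inv(S @ S.conj().T)

print("estimation MSE:", np.mean(np.abs(H - H_ls) ** 2))
```

The paper derives the analogous optimal structure jointly for the source and relay training sequences, together with the relay power allocation between them.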

Relevance: 30.00%

Abstract:

© 2002-2012 IEEE. In this paper, we investigate the channel estimation problem for two-way multiple-input multiple-output (MIMO) relay communication systems in frequency-selective fading environments. We apply the method of superimposed channel training to estimate the individual channel state information (CSI) of the first-hop and second-hop links for two-way MIMO relay systems with frequency-selective fading channels. In this algorithm, a relay training sequence is superimposed on the received signals at the relay node to assist the estimation of the second-hop channel matrices. The optimal structure of the source and relay training sequences is derived to minimize the mean-squared error (MSE) of channel estimation. Moreover, the optimal power allocation between the source and relay training sequences is derived to improve the performance of channel estimation. Numerical examples are shown to demonstrate the performance of the proposed superimposed channel training algorithm for two-way MIMO relay systems in frequency-selective fading environments.

Relevance: 30.00%

Abstract:

Computer-based human motion tracking systems are widely used in medicine and sports. Accurate determination of limb lengths is crucial not only for constructing the limb motion trajectories used in the evaluation of human kinematics, but also for individually recognising human beings. Yet in common practice, limb lengths are measured manually, which is inconvenient, time-consuming, and requires professional knowledge. In this paper, the estimation of limb lengths is automated with a novel algorithm that calculates curvature from inertial sensor measurements. The proposed algorithm was validated with computer simulations and with experiments conducted on four healthy subjects. The experimental results show low root mean squared error percentages compared to measured lengths: upper arm 5.16%, upper limbs 5.09%, upper leg 2.56%, and lower extremities 6.64%.
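The paper's curvature algorithm is not spelled out in the abstract, but a kinematic relation it can exploit is simple: a sensor at distance r from the joint of a swinging limb measures centripetal acceleration a_c = ω²r, so r can be recovered by least squares from gyroscope and accelerometer readings. The simulated limb length, angular rates, and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate a limb segment of true length r swinging at varying angular rates
r_true = 0.35                                 # m, e.g. a forearm (assumed)
omega = rng.uniform(1.0, 4.0, 200)            # gyroscope angular rate, rad/s
accel = omega ** 2 * r_true + rng.normal(0.0, 0.05, 200)  # noisy centripetal accel

# Least-squares fit of a_c = omega^2 * r for the unknown length r
w2 = omega ** 2
r_est = np.sum(w2 * accel) / np.sum(w2 ** 2)

err_pct = abs(r_est - r_true) / r_true * 100
print(f"estimated length: {r_est:.3f} m ({err_pct:.2f}% error)")
```

Pooling many samples across varying angular rates is what drives the error percentage down, consistent with the low RMSE figures reported above.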