124 results for Data Synchronization Error


Relevance: 20.00%

Abstract:

Estimation of Taylor's power law for species abundance data may be performed by linear regression of the log empirical variances on the log means, but this method suffers from a problem of bias for sparse data. We show that the bias may be reduced by using a bias-corrected Pearson estimating function. Furthermore, we investigate a more general regression model allowing for site-specific covariates. This method may be efficiently implemented using a Newton scoring algorithm, with standard errors calculated from the inverse Godambe information matrix. The method is applied to a set of biomass data for benthic macrofauna from two Danish estuaries. (C) 2011 Elsevier B.V. All rights reserved.
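As a rough illustration of the uncorrected estimator the abstract starts from (not the paper's bias-corrected Pearson estimating function), the log-log regression can be sketched in Python; the function name and data are hypothetical:

```python
import math

def taylor_power_law(means, variances):
    """Fit log(variance) = log(a) + b * log(mean) by ordinary least
    squares. This is the classical estimator that the paper shows is
    biased for sparse data."""
    xs = [math.log(m) for m in means]
    ys = [math.log(v) for v in variances]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
    log_a = ybar - b * xbar
    return math.exp(log_a), b

# Exact power-law data, variance = 3 * mean**2, so b should be 2
a, b = taylor_power_law([1, 2, 4, 8], [3, 12, 48, 192])
```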

Relevance: 20.00%

Abstract:

Interval-censored survival data, in which the event of interest is not observed exactly but is only known to occur within some time interval, occur very frequently. In some situations, event times might be censored into different, possibly overlapping intervals of variable widths; however, in other situations, information is available for all units at the same observed visit time. In the latter cases, interval-censored data are termed grouped survival data. Here we present alternative approaches for analyzing interval-censored data. We illustrate these techniques using a survival data set involving mango tree lifetimes. This study is an example of grouped survival data.
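The defining feature of interval-censored data is that each likelihood contribution is a probability mass over an interval rather than a density at a point. A minimal sketch under a hypothetical exponential working model (not one of the paper's approaches) makes this concrete:

```python
import math

def interval_loglik(rate, intervals):
    """Log-likelihood for interval-censored data under an exponential
    lifetime model: each event is only known to lie in (L, R]."""
    ll = 0.0
    for L, R in intervals:
        # P(L < T <= R) = exp(-rate*L) - exp(-rate*R)
        ll += math.log(math.exp(-rate * L) - math.exp(-rate * R))
    return ll

def fit_rate(intervals, lo=1e-6, hi=10.0, iters=100):
    """Maximize the (concave) log-likelihood by golden-section search."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c = b - phi * (b - a)
        d = a + phi * (b - a)
        if interval_loglik(c, intervals) < interval_loglik(d, intervals):
            a = c
        else:
            b = d
    return (a + b) / 2

# Hypothetical censoring intervals (L, R] for four units
intervals = [(0.0, 1.0), (1.0, 2.0), (0.5, 1.5), (2.0, 3.0)]
rate_hat = fit_rate(intervals)
```

Grouped survival data are the special case in which the interval endpoints are the same visit times for all units.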

Relevance: 20.00%

Abstract:

This paper proposes a regression model considering the modified Weibull distribution. This distribution can be used to model bathtub-shaped failure rate functions. Assuming censored data, we consider maximum likelihood and jackknife estimators for the parameters of the model. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes, and we also present some ways to perform global influence analysis. In addition, for different parameter settings, sample sizes and censoring percentages, various simulations are performed, and the empirical distribution of the modified deviance residual is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to a martingale-type residual in log-modified Weibull regression models with censored data. Finally, we analyze a real data set under log-modified Weibull regression models. A diagnostic analysis and a model check based on the modified deviance residual are performed to select appropriate models. (c) 2008 Elsevier B.V. All rights reserved.
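The bathtub shape can be checked numerically. A sketch using the hazard commonly given for the modified Weibull distribution, h(t) = a t^(b-1) (b + lam t) exp(lam t), which is bathtub-shaped for b < 1 and lam > 0 (parameter values here are illustrative, not from the paper):

```python
import math

def mw_hazard(t, a, b, lam):
    """Hazard of the modified Weibull distribution; bathtub-shaped
    (decreasing, then increasing) when b < 1 and lam > 0."""
    return a * t ** (b - 1) * (b + lam * t) * math.exp(lam * t)

# With b = 0.5 the hazard falls early in life and rises late
early = mw_hazard(0.1, 1.0, 0.5, 0.5)
mid = mw_hazard(1.0, 1.0, 0.5, 0.5)
late = mw_hazard(5.0, 1.0, 0.5, 0.5)
```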

Relevance: 20.00%

Abstract:

The zero-inflated negative binomial model is used to account for overdispersion detected in data that are initially analyzed under the zero-inflated Poisson model. A frequentist analysis, a jackknife estimator and a non-parametric bootstrap for parameter estimation of zero-inflated negative binomial regression models are considered. In addition, an EM-type algorithm is developed for performing maximum likelihood estimation. Then the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes, and some ways to perform global influence analysis, are derived. In order to study departures from the error assumption as well as the presence of outliers, residual analysis based on the standardized Pearson residuals is discussed. The relevance of the approach is illustrated with a real data set, where it is shown that zero-inflated negative binomial regression models seem to fit the data better than the Poisson counterpart. (C) 2010 Elsevier B.V. All rights reserved.
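The zero-inflated mixture itself is simple to write down: with probability pi the count is a structural zero, otherwise it follows a negative binomial. A minimal sketch (integer dispersion parameter r assumed for simplicity; names are hypothetical):

```python
import math

def nb_pmf(y, r, p):
    """Negative binomial pmf: number of failures y before the r-th
    success, success probability p (r a positive integer here)."""
    return math.comb(y + r - 1, y) * p ** r * (1 - p) ** y

def zinb_pmf(y, pi, r, p):
    """Zero-inflated negative binomial: extra point mass pi at zero
    on top of the NB(r, p) distribution."""
    base = nb_pmf(y, r, p)
    return pi + (1 - pi) * base if y == 0 else (1 - pi) * base

# The pmf still sums to one, and zero is inflated relative to plain NB
total = sum(zinb_pmf(y, 0.3, 2, 0.4) for y in range(200))
```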

Relevance: 20.00%

Abstract:

In this study, regression models are evaluated for grouped survival data when the effect of censoring time is considered in the model and the regression structure is modeled through four link functions. The methodology for grouped survival data is based on life tables, and the times are grouped into k intervals so that ties are eliminated. Thus, the data modeling is performed by considering discrete lifetime regression models. The model parameters are estimated by using the maximum likelihood and jackknife methods. To detect influential observations in the proposed models, diagnostic measures based on case deletion, termed global influence, and influence measures based on small perturbations in the data or in the model, referred to as local influence, are used. In addition to those measures, the total local influence estimate is also employed. Various simulation studies are performed to compare the performance of the four link functions of the regression models for grouped survival data for different parameter settings, sample sizes and numbers of intervals. Finally, a data set is analyzed by using the proposed regression models. (C) 2010 Elsevier B.V. All rights reserved.
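In discrete-time survival regression, the link function maps a linear predictor eta to the conditional probability of failure in an interval. A sketch of four links commonly used in this setting (the abstract does not name its four links, so this choice is an assumption):

```python
import math

def inverse_link(eta, link):
    """Interval failure probability from linear predictor eta under
    four common links for discrete-time survival models."""
    if link == "logit":
        return 1 / (1 + math.exp(-eta))
    if link == "probit":
        return 0.5 * (1 + math.erf(eta / math.sqrt(2)))
    if link == "cloglog":   # grouped-data version of proportional hazards
        return 1 - math.exp(-math.exp(eta))
    if link == "loglog":
        return math.exp(-math.exp(-eta))
    raise ValueError(link)

p_cll = inverse_link(0.0, "cloglog")
```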

Relevance: 20.00%

Abstract:

A four-parameter extension of the generalized gamma distribution capable of modelling a bathtub-shaped hazard rate function is defined and studied. The beauty and importance of this distribution lie in its ability to model monotone and non-monotone failure rate functions, which are quite common in lifetime data analysis and reliability. The new distribution has a number of well-known lifetime special sub-models, such as the exponentiated Weibull, exponentiated generalized half-normal, exponentiated gamma and generalized Rayleigh, among others. We derive two infinite sum representations for its moments. We calculate the density of the order statistics and two expansions for their moments. The method of maximum likelihood is used for estimating the model parameters and the observed information matrix is obtained. Finally, a real data set from the medical area is analysed.

Relevance: 20.00%

Abstract:

Joint generalized linear models and double generalized linear models (DGLMs) were designed to model outcomes for which the variability can be explained using factors and/or covariates. When such factors operate, the usual normal regression models, which inherently exhibit constant variance, will under-represent variation in the data and hence may lead to erroneous inferences. For count and proportion data, such noise factors can generate a so-called overdispersion effect, and the use of binomial and Poisson models underestimates the variability and, consequently, incorrectly indicates significant effects. In this manuscript, we propose a DGLM from a Bayesian perspective, focusing on the case of proportion data, where the overdispersion can be modeled using a random effect that depends on some noise factors. The posterior joint density function was sampled using Markov chain Monte Carlo algorithms, allowing inferences over the model parameters. An application to a data set on apple tissue culture is presented, for which it is shown that the Bayesian approach is quite feasible, even when limited prior information is available, thereby generating valuable insight for the researcher about the experimental results.
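The overdispersion effect described above can be quantified with the classic variance-inflation formula for a binomial count whose success probability carries a random effect (as in the beta-binomial model); this is a generic illustration, not the paper's DGLM:

```python
def binomial_variance(n, p):
    """Variance assumed by a plain binomial model."""
    return n * p * (1 - p)

def overdispersed_variance(n, p, rho):
    """Variance of a binomial count when the success probability is
    itself random (intra-cluster correlation rho): the binomial
    variance is inflated by the factor 1 + (n - 1) * rho."""
    return n * p * (1 - p) * (1 + (n - 1) * rho)

v_bin = binomial_variance(20, 0.3)
v_over = overdispersed_variance(20, 0.3, 0.1)
```

Ignoring the inflation factor is exactly what makes a plain binomial analysis understate standard errors and flag spurious effects.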

Relevance: 20.00%

Abstract:

The application of airborne laser scanning (ALS) technologies in forest inventories has shown great potential to improve the efficiency of forest planning activities. Precise estimates, fast assessment and relatively low complexity can explain the good results in terms of efficiency. The evolution of GPS and inertial measurement technologies, as well as the lower assessment costs observed when these technologies are applied to large-scale studies, can explain the increasing dissemination of ALS technologies. The good quality of the results can be expressed by estimates of volume and basal area with estimated error below 8.4%, depending on the size of the sampled area, the quantity of laser pulses per square meter and the number of control plots. This paper analyzes the potential of an ALS assessment to produce certain forest inventory statistics in plantations of cloned Eucalyptus spp. with precision equal or superior to conventional methods. The statistics of interest in this case were: volume, basal area, mean height and mean height of dominant trees. The ALS flight for data assessment covered two strips of approximately 2 by 20 km, in which clouds of points were sampled in circular plots with a radius of 13 m. Plots were sampled in different parts of the strips to cover different stand ages. From the clouds of points generated by the ALS assessment, the following statistics were calculated: overall height mean, standard error, five percentiles (heights below which 10%, 30%, 50%, 70% and 90% of the ALS points above ground level lie in the cloud), and the density of points above ground level in each percentile. The ALS statistics were used in regression models to estimate mean diameter, mean height, mean height of dominant trees, basal area and volume. Conventional forest inventory sample plots provided the field data.
For volume, an exploratory assessment involving different combinations of ALS statistics allowed for the definition of the most promising relationships and fitting tests based on well-known forest biometric models. The models based on ALS statistics that produced the best results involved: the 30% percentile to estimate mean diameter (R(2) = 0.88 and MQE% = 0.0004); the 10% and 90% percentiles to estimate mean height (R(2) = 0.94 and MQE% = 0.0003); the 90% percentile to estimate dominant height (R(2) = 0.96 and MQE% = 0.0003); the 10% percentile and mean height of ALS points to estimate basal area (R(2) = 0.92 and MQE% = 0.0016); and, to estimate volume, age and the 30% and 90% percentiles (R(2) = 0.95 and MQE% = 0.002). Among the tested forest biometric models, the best fits were provided by the modified Schumacher model using age and the 90% percentile, the modified Clutter model using age, mean height of ALS points and the 70% percentile, and the modified Buckman model using age, mean height of ALS points and the 10% percentile.
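The height percentiles that drive these regressions are straightforward to extract from a point cloud. A minimal sketch using a nearest-rank rule on a hypothetical list of above-ground return heights (the paper's exact percentile definition may differ):

```python
def height_percentile(heights, q):
    """Height below which a fraction q of the ALS returns lie
    (simple nearest-rank percentile of above-ground returns)."""
    hs = sorted(heights)
    k = max(0, min(len(hs) - 1, int(round(q * (len(hs) - 1)))))
    return hs[k]

# Hypothetical above-ground return heights (m) for one 13 m plot
cloud = [2.0, 4.5, 7.1, 9.8, 12.3, 14.0, 15.2, 16.8, 18.1, 19.5]
p30 = height_percentile(cloud, 0.30)
p90 = height_percentile(cloud, 0.90)
```

Statistics such as p30 and p90 would then enter the Schumacher-, Clutter- or Buckman-type regressions as predictors alongside stand age.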

Relevance: 20.00%

Abstract:

Leaf wetness duration (LWD) models based on empirical approaches offer practical advantages over physically based models in agricultural applications, but their spatial portability is questionable because they may be biased to the climatic conditions under which they were developed. In our study, the spatial portability of three LWD models with empirical characteristics - a RH threshold model, a decision tree model with wind speed correction, and a fuzzy logic model - was evaluated using weather data collected in Brazil, Canada, Costa Rica, Italy and the USA. The fuzzy logic model was more accurate than the other models in estimating LWD measured by painted leaf wetness sensors. The fraction of correct estimates for the fuzzy logic model was greater (0.87) than for the other models (0.85-0.86) across 28 sites where painted sensors were installed, and the kappa statistic of agreement between the model and painted sensors was greater for the fuzzy logic model (0.71) than for the other models (0.64-0.66). Values of the kappa statistic for the fuzzy logic model were also less variable across sites than those of the other models. When model estimates were compared with measurements from unpainted leaf wetness sensors, the fuzzy logic model had a smaller mean absolute error (2.5 h day(-1)) than the other models (2.6-2.7 h day(-1)) after the model was calibrated for the unpainted sensors. The results suggest that the fuzzy logic model has greater spatial portability than the other models evaluated and merits further validation in comparison with physical models under a wider range of climate conditions. (C) 2010 Elsevier B.V. All rights reserved.
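Of the three empirical models compared, the RH threshold model is the simplest: an hour is classified as wet whenever relative humidity meets a fixed threshold. A sketch assuming the commonly used 90% threshold (the study's exact threshold is not stated in the abstract):

```python
def lwd_rh_threshold(rh_hourly, threshold=90.0):
    """RH-threshold leaf wetness model: count an hour as wet when
    relative humidity meets or exceeds the threshold."""
    return sum(1 for rh in rh_hourly if rh >= threshold)

# Hypothetical overnight hourly RH readings (%)
rh = [85, 88, 91, 95, 97, 96, 92, 89, 80, 75]
hours_wet = lwd_rh_threshold(rh)
```

The decision tree and fuzzy logic models refine this idea with wind speed corrections and graded membership functions rather than a hard cutoff.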

Relevance: 20.00%

Abstract:

Grass reference evapotranspiration (ETo) is an important agrometeorological parameter for climatological and hydrological studies, as well as for irrigation planning and management. There are several methods to estimate ETo, but their performance in different environments is diverse, since all of them have some empirical background. The FAO Penman-Monteith (FAO PM) method has been considered a universal standard to estimate ETo for more than a decade. This method considers many parameters related to the evapotranspiration process: net radiation (Rn), air temperature (T), vapor pressure deficit (Delta e), and wind speed (U); and has presented very good results when compared to data from lysimeters populated with short grass or alfalfa. In some conditions, the use of the FAO PM method is restricted by the lack of input variables. In these cases, when data are missing, the option is to calculate ETo by the FAO PM method using estimated input variables, as recommended by FAO Irrigation and Drainage Paper 56. Based on that, the objective of this study was to evaluate the performance of the FAO PM method to estimate ETo when Rn, Delta e, and U data are missing, in Southern Ontario, Canada. Other alternative methods were also tested for the region: Priestley-Taylor, Hargreaves, and Thornthwaite. Data from 12 locations across Southern Ontario, Canada, were used to compare ETo estimated by the FAO PM method with a complete data set and with missing data. The alternative ETo equations were also tested and calibrated for each location. When relative humidity (RH) and U data were missing, the FAO PM method was still a very good option for estimating ETo for Southern Ontario, with RMSE smaller than 0.53 mm day(-1). For these cases, U data were replaced by the normal values for the region and Delta e was estimated from temperature data.
The Priestley-Taylor method was also a good option for estimating ETo when U and Delta e data were missing, mainly when calibrated locally (RMSE = 0.40 mm day(-1)). When Rn was missing, the FAO PM method was not good enough for estimating ETo, with RMSE increasing to 0.79 mm day(-1). When only T data were available, the adjusted Hargreaves and modified Thornthwaite methods were better options to estimate ETo than the FAO PM method, since the RMSEs from these methods, respectively 0.79 and 0.83 mm day(-1), were significantly smaller than that obtained by FAO PM (RMSE = 1.12 mm day(-1)). (C) 2009 Elsevier B.V. All rights reserved.
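Estimating Delta e from temperature alone, as done above when humidity data were missing, follows the FAO-56 recipe: saturation vapor pressure from the Tetens-type equation, with actual vapor pressure approximated by assuming the dew point equals T_min. A sketch with the FAO-56 coefficients:

```python
import math

def sat_vapor_pressure(t_celsius):
    """FAO-56 saturation vapor pressure (kPa) at air temperature T."""
    return 0.6108 * math.exp(17.27 * t_celsius / (t_celsius + 237.3))

def vpd_from_temperature(t_max, t_min):
    """Vapor pressure deficit when humidity data are missing:
    es is averaged over T_max and T_min, and actual vapor pressure
    is approximated by assuming dew point ~= T_min."""
    es = (sat_vapor_pressure(t_max) + sat_vapor_pressure(t_min)) / 2
    ea = sat_vapor_pressure(t_min)
    return es - ea

e20 = sat_vapor_pressure(20.0)
vpd = vpd_from_temperature(28.0, 14.0)
```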

Relevance: 20.00%

Abstract:

Hydrological models featuring root water uptake usually do not include compensation mechanisms by which reduced uptake from dry layers is compensated by increased uptake from wetter layers. We developed a physically based root water uptake model with an implicit compensation mechanism. Based on an expression for the matric flux potential (M) as a function of the distance to the root, and assuming a depth-independent value of M at the root surface, uptake per layer is shown to be a function of layer bulk M, root surface M, and a weighting factor that depends on root length density and root radius. Actual transpiration can be calculated from the sum of layer uptake rates. The proposed reduction function (PRF) was built into the SWAP model, and predictions were compared to those made with the Feddes reduction function (FRF). Simulation results were tested against data from Canada (continuous spring wheat [Triticum aestivum L.]) and Germany (spring wheat, winter barley [Hordeum vulgare L.], sugarbeet [Beta vulgaris L.], winter wheat rotation). For the Canadian data, the root mean square error of prediction (RMSEP) for water content in the upper soil layers was very similar for FRF and PRF; for the deeper layers, RMSEP was smaller for PRF. For the German data, RMSEP was lower for PRF in the upper layers and similar for both models in the deeper layers. In conclusion, though dependent on the properties of the data sets available for testing, the incorporation of the new reduction function into SWAP was successful, providing new capabilities for simulating compensated root water uptake without increasing the number of input parameters or degrading model performance.
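The implicit compensation can be illustrated by distributing potential transpiration over layers in proportion to a weight times the excess of layer bulk M over the root-surface value, so that dry layers automatically contribute less and wetter layers more. This is a simplified sketch of that idea, not the SWAP implementation; all names and numbers are hypothetical:

```python
def layer_uptake(tp, weights, m_layers, m_root):
    """Distribute potential transpiration tp over soil layers in
    proportion to weight * (layer bulk M - M at the root surface).
    weights stand in for the root length density / root radius factor."""
    drives = [w * max(m - m_root, 0.0) for w, m in zip(weights, m_layers)]
    total = sum(drives)
    if total == 0:
        return [0.0] * len(weights)
    return [tp * d / total for d in drives]

# Three layers; the bottom layer is at the root-surface M (too dry)
uptake = layer_uptake(5.0, [1.0, 2.0, 1.0], [0.8, 0.5, 0.1], 0.1)
```

The layer at the root-surface potential takes up nothing, and its share is reallocated to the wetter layers, which is exactly the compensation effect.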

Relevance: 20.00%

Abstract:

Leaf wetness duration (LWD) is related to plant disease occurrence and is therefore a key parameter in agrometeorology. As LWD is seldom measured at standard weather stations, it must be estimated in order to ensure the effectiveness of warning systems and the scheduling of chemical disease control. Among the models used to estimate LWD, those that use physical principles of dew formation and dew and/or rain evaporation have shown good portability and sufficiently accurate results for operational use. However, the requirement of net radiation (Rn) is a disadvantage for operational physical models, since this variable is usually not measured over crops or even at standard weather stations. With the objective of proposing a solution for this problem, this study evaluated the ability of four models to estimate hourly Rn and their impact on LWD estimates using a Penman-Monteith approach. A field experiment was carried out in Elora, Ontario, Canada, with measurements of LWD, Rn and other meteorological variables over mowed turfgrass for a 58-day period during the growing season of 2003. Four models for estimating hourly Rn, based on different combinations of incoming solar radiation (Rg), air temperature (T), relative humidity (RH), cloud cover (CC) and cloud height (CH), were evaluated. Measured and estimated hourly Rn values were applied in a Penman-Monteith model to estimate LWD. Correlating measured and estimated Rn, we observed that all models performed well in terms of estimating hourly Rn. However, when cloud data were used, the models overestimated positive Rn and underestimated negative Rn. When only Rg and T were used to estimate hourly Rn, the model underestimated positive Rn and no tendency was observed for negative Rn. The best performance was obtained with Model I, which presented, in general, the smallest mean absolute error (MAE) and the highest C-index.
When measured LWD was compared to the Penman-Monteith LWD, calculated with measured and estimated Rn, few differences were observed. Both precision and accuracy were high, with the slopes of the relationships ranging from 0.96 to 1.02 and R-2 from 0.85 to 0.92, resulting in C-indices between 0.87 and 0.93. The LWD mean absolute errors associated with Rn estimates were between 1.0 and 1.5 h, which is accurate enough for use in plant disease management schemes.
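An hourly Rn estimate from Rg, T and humidity can be sketched FAO-56 style: net shortwave with a grass albedo of 0.23 minus an empirical net longwave term. This is a generic illustration with FAO-56 coefficients, not a reproduction of the paper's Models I-IV:

```python
import math

def net_radiation(rg, t_celsius, ea_kpa, rel_shortwave):
    """Hourly net radiation (MJ m-2 h-1) estimated from incoming solar
    radiation rg, air temperature, actual vapor pressure ea and the
    ratio of measured to clear-sky shortwave (rel_shortwave)."""
    sigma = 2.043e-10          # Stefan-Boltzmann constant, MJ K-4 m-2 h-1
    rns = (1 - 0.23) * rg      # net shortwave, grass albedo 0.23
    tk = t_celsius + 273.16
    rnl = sigma * tk ** 4 * (0.34 - 0.14 * math.sqrt(ea_kpa)) \
          * (1.35 * rel_shortwave - 0.35)
    return rns - rnl

# Hypothetical midday hour: Rg = 2.5 MJ m-2 h-1, 25 C, ea = 2.0 kPa
rn = net_radiation(2.5, 25.0, 2.0, 0.8)
```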

Relevance: 20.00%

Abstract:

This article presents a statistical model of agricultural yield data based on a set of hierarchical Bayesian models that allows joint modeling of temporal and spatial autocorrelation. This method captures a comprehensive range of the various uncertainties involved in predicting crop insurance premium rates as opposed to the more traditional ad hoc, two-stage methods that are typically based on independent estimation and prediction. A panel data set of county-average yield data was analyzed for 290 counties in the State of Parana (Brazil) for the period of 1990 through 2002. Posterior predictive criteria are used to evaluate different model specifications. This article provides substantial improvements in the statistical and actuarial methods often applied to the calculation of insurance premium rates. These improvements are especially relevant to situations where data are limited.
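The actuarial quantity the yield model ultimately feeds is the premium rate: expected shortfall below the yield guarantee divided by the liability. A minimal empirical-rating sketch (names and numbers hypothetical; the hierarchical Bayesian machinery is not reproduced here):

```python
def premium_rate(yields, coverage, expected_yield):
    """Actuarially fair premium rate for a yield-guarantee contract:
    mean shortfall below the guarantee, divided by the guarantee."""
    guarantee = coverage * expected_yield
    shortfall = sum(max(guarantee - y, 0.0) for y in yields) / len(yields)
    return shortfall / guarantee

# Hypothetical county yields (t/ha), 70% coverage of an expected 3.0 t/ha
rate = premium_rate([2.8, 3.1, 2.2, 3.4, 1.9], 0.7, 3.0)
```

Replacing the empirical yield draws with posterior predictive draws from a spatio-temporal model is what lets the Bayesian approach propagate the "comprehensive range of uncertainties" the abstract refers to.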

Relevance: 20.00%

Abstract:

When building genetic maps, it is necessary to choose from several marker ordering algorithms and criteria, and the choice is not always simple. In this study, we evaluate the efficiency of the algorithms try (TRY), seriation (SER), rapid chain delineation (RCD), recombination counting and ordering (RECORD) and unidirectional growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores) and LHMC (likelihood through hidden Markov chains), used with the RIPPLE algorithm for error verification, in the construction of genetic linkage maps. A linkage map of a hypothetical diploid and monoecious plant species was simulated containing one linkage group and 21 markers with a fixed distance of 3 cM between them. In all, 700 F(2) populations were randomly simulated with 100 and 400 individuals and with different combinations of dominant and co-dominant markers, as well as 10 and 20% of missing data. The simulations showed that, in the presence of co-dominant markers only, any combination of algorithm and criterion may be used, even for a reduced population size. In the case of a smaller proportion of dominant markers, any of the algorithms and criteria (except SALOD) investigated may be used. In the presence of high proportions of dominant markers and smaller samples (around 100), the probability of linkage in repulsion between markers increases and, in this case, use of the algorithms TRY and SER associated with RIPPLE under the LHMC criterion would provide better results. Heredity (2009) 103, 494-502; doi:10.1038/hdy.2009.96; published online 29 July 2009
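The SARF criterion is easy to state: score an order by the sum of recombination fractions between adjacent markers, and prefer orders with smaller scores. For a handful of markers it can even be brute-forced, which is what the heuristic algorithms above avoid for realistic map sizes; this sketch uses a hypothetical pairwise recombination-fraction matrix:

```python
from itertools import permutations

def sarf(order, rf):
    """Sum of adjacent recombination fractions for a marker order."""
    return sum(rf[a][b] for a, b in zip(order, order[1:]))

def best_order_sarf(n, rf):
    """Brute-force the order minimizing SARF (feasible only for small n)."""
    best = min(permutations(range(n)), key=lambda o: sarf(o, rf))
    # an order and its reverse are equivalent; fix the orientation
    return best if best[0] <= best[-1] else tuple(reversed(best))

# Recombination fractions increasing with true distance between markers
rf = [[abs(i - j) * 0.05 for j in range(5)] for i in range(5)]
order = best_order_sarf(5, rf)
```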

Relevance: 20.00%

Abstract:

Aiming to achieve the ideal time of ovum pick-up (OPU) for in vitro embryo production (IVP) in crossbred heifers, two Latin square design studies investigated the effect of ovarian follicular wave synchronization with estradiol benzoate (EB) and progestins. For each experiment, the stage of the estrous cycle of crossbred heifers was synchronized either with a norgestomet ear implant (Experiment 1) or a progesterone intravaginal device (Experiment 2) for 7 d, followed by the administration of 150 mu g D-cloprostenol. On Day 7, all follicles >3 mm in diameter were aspirated and implants/devices were replaced by new ones. Afterwards, implant/device replacement was conducted every 14 d. Each experiment had three treatment groups. In Experiment 1 (n = 12), heifers in Group 2X had their follicles aspirated twice a week and those in Groups 1X and 1X-EB were submitted to OPU once a week for a period of 28 d. Heifers from Group 1X-EB also received 2 mg EB i.m. immediately after each OPU session. In Experiment 2 (n = 11), animals from Group 0EB did not receive EB, while heifers in Groups 2EB and 5EB received 2 and 5 mg of EB, respectively, immediately after OPU. The OPU sessions were performed once weekly for 28 d. Therefore, in both experiments, four OPU sessions were performed in heifers aspirated once a week and, in Experiment 1, eight OPU sessions were performed in heifers aspirated twice a week. Additionally, during the 7-d period following follicular aspiration, ovarian ultrasonography examinations were conducted to measure the diameter of the largest follicle and blood samples were collected for FSH quantification by RIA. In Experiment 1, all viable oocytes recovered were in vitro matured and fertilized. Results indicated that while progestin and EB altered follicular wave patterns, the treatments did not prevent establishment of follicular dominance on the ovaries of heifers during OPU at 7-d intervals.
Furthermore, the proposed follicular wave synchronization strategies did not improve the number or quality of the recovered oocytes, or the number of in vitro produced embryos. (C) 2009 Elsevier B.V. All rights reserved.