953 results for Nonparametric regression techniques


Relevance:

30.00%

Publisher:

Abstract:

The El Niño-Southern Oscillation (ENSO) is a climatic phenomenon related to the inter-annual variability of global meteorological patterns, influencing sea surface temperature and rainfall variability. It influences human health indirectly through extreme temperature and moisture conditions that may accelerate the spread of some vector-borne viral diseases, such as dengue fever (DF). This work examines the spatial distribution of the association between ENSO and DF in the countries of the Americas during 1995-2004, a period that includes the 1997-1998 El Niño, one of the most important climatic events of the 20th century. Data on the Southern Oscillation Index (SOI), indicating El Niño-La Niña activity, were obtained from the Australian Bureau of Meteorology. The annual DF incidence (AIy) by country was computed using Pan American Health Organization data. SOI and AIy values were standardised as deviations from the mean and plotted in bar-line graphs. The regression coefficients between SOI and AIy (rSOI,AI) were calculated and spatially interpolated with an inverse distance weighted algorithm. The results indicate that among the five years registering the highest numbers of cases (1998, 2002, 2001, 2003 and 1997), four had El Niño activity. In the southern hemisphere, the annual spatially weighted mean centre of epidemics moved southward, from 6° 31' S in 1995 to 21° 12' S in 1999, and the rSOI,AI values were negative in Cuba, Belize, Guyana and Costa Rica, indicating synchrony between higher DF incidence rates and higher El Niño activity. The rSOI,AI map allows visualisation of a graded surface with higher values of the ENSO-DF association in Mexico, Central America, the northern Caribbean islands and the extreme north-northwest of South America.
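
As an illustration of the two computational steps described above, the following Python sketch (with made-up coordinates and correlation values, not the study data) standardises a series as deviations from the mean and interpolates point values onto query locations with an inverse distance weighted scheme.

```python
import numpy as np

def standardize(x):
    """Express a series as deviations from its mean, in standard-deviation units."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse distance weighted interpolation of point values onto query locations."""
    xy_known, xy_query = np.asarray(xy_known, float), np.asarray(xy_query, float)
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.where(d == 0, 1e-12, d)                    # avoid division by zero at data points
    w = 1.0 / d ** power
    return (w * np.asarray(values)).sum(axis=1) / w.sum(axis=1)

# Hypothetical annual SOI values, standardised as deviations from the mean
soi = standardize([-5.2, 12.1, -28.5, 9.7, 3.3])
print(soi)

# Hypothetical rSOI,AI values at four country centroids (lon, lat), interpolated to two grid points
centroids = [(-99.1, 19.4), (-84.1, 9.9), (-77.0, 4.6), (-58.4, -34.6)]
r_soi_ai = [-0.6, -0.4, -0.1, 0.2]
print(idw(centroids, r_soi_ai, [(-90.0, 10.0), (-70.0, -10.0)]))
```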

Relevance:

30.00%

Publisher:

Abstract:

Aim: To compare the percentage of gutta-percha, sealer and voids and the influence of isthmuses in mesial root canals of mandibular molars filled with different techniques. Methodology: Canals in 60 mesial roots of mandibular first molars were prepared with ProTaper instruments to size F2 (size 25, 0.08 taper) and filled using single-cone, lateral compaction, System B or Thermafil techniques. An epoxy resin sealer was labelled with Rhodamine-B dye to allow analysis under a confocal microscope. The percentages of gutta-percha, sealer and voids were calculated at 2, 4 and 6 mm from the apex using Image Tool 3.0 software. Statistical analysis was performed using the nonparametric Kruskal-Wallis and Dunn tests (P < 0.05). The influence of isthmuses on the presence or absence of voids was evaluated using the Fisher test. Results: At the 2 mm level, the percentages of gutta-percha, sealer and voids were similar amongst the System B, lateral compaction and single-cone techniques. The single-cone technique revealed significantly less gutta-percha and more sealer and voids in comparison with the Thermafil technique at the 2 and 4 mm levels (P < 0.05). The analysis of all sections (2, 4 and 6 mm) revealed that more gutta-percha and less sealer and voids were found in root canals filled with the Thermafil and System B techniques (P < 0.05). The Fisher test revealed that the presence of isthmuses increased the occurrence of voids in the lateral compaction group only (P < 0.05). Conclusion: Gutta-percha, sealer-filled area and voids were dependent on the canal-filling technique. The presence of isthmuses may influence the quality of the root filling.
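
The nonparametric comparisons named above can be reproduced in outline with scipy; the sketch below uses simulated filling percentages and hypothetical isthmus/void counts, not the study data, and notes where a Dunn post hoc test would follow.

```python
import numpy as np
from scipy.stats import kruskal, fisher_exact

rng = np.random.default_rng(0)
# Simulated gutta-percha percentages for four filling techniques (hypothetical data)
groups = {name: rng.normal(loc, 5, size=15)
          for name, loc in [("single-cone", 80), ("lateral compaction", 85),
                            ("System B", 88), ("Thermafil", 92)]}

h, p = kruskal(*groups.values())
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.4f}")   # follow up with Dunn's test if p < 0.05

# 2x2 table: rows = isthmus present/absent, columns = voids present/absent (hypothetical counts)
table = [[9, 6], [3, 12]]
odds, p_fisher = fisher_exact(table)
print(f"Fisher exact test: p={p_fisher:.4f}")
```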

Relevance:

30.00%

Publisher:

Abstract:

Survival analysis is applied when the time until the occurrence of an event is of interest. Such data are routinely collected in plant disease studies, although applications of the method are uncommon. The objective of this study was to use two studies on post-harvest diseases of peaches, considering the two harvests together and a random effect shared by fruits from the same tree, in order to describe the main techniques of survival analysis. The nonparametric Kaplan-Meier method, the log-rank test and the semi-parametric Cox proportional hazards model were used to estimate the effect of cultivar and of the number of days after full bloom on survival until brown rot symptom expression, and on the instantaneous risk of expressing it, in two consecutive harvests. The joint analysis with a baseline effect varying between harvests, and the confirmation of the tree effect as a grouping factor with a random effect, were appropriate for interpreting the phenomenon (disease) evaluated and can be important tools to replace or complement conventional analyses, respecting the nature of the variable and of the phenomenon.
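
A minimal sketch of the same workflow in Python with the lifelines package, on simulated data: Kaplan-Meier estimation, a log-rank test between two cultivars, and a Cox model. The shared tree effect is approximated here with clustered robust standard errors rather than a true frailty term, which lifelines does not provide directly.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "days_to_symptom": rng.exponential(8, n).round() + 1,  # hypothetical time to brown rot
    "observed": rng.integers(0, 2, n),                     # 0 = censored, 1 = symptom observed
    "cultivar": rng.integers(0, 2, n),                     # two cultivars
    "tree": rng.integers(0, 20, n),                        # grouping factor (tree of origin)
})

km = KaplanMeierFitter().fit(df["days_to_symptom"], df["observed"])
print(km.median_survival_time_)

a, b = df[df.cultivar == 0], df[df.cultivar == 1]
lr = logrank_test(a["days_to_symptom"], b["days_to_symptom"],
                  event_observed_A=a["observed"], event_observed_B=b["observed"])
print(lr.p_value)

cph = CoxPHFitter().fit(df, duration_col="days_to_symptom", event_col="observed",
                        cluster_col="tree", formula="cultivar")
cph.print_summary()
```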

Relevance:

30.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance:

30.00%

Publisher:

Abstract:

The spatial distribution of forest biomass in the Amazon is heterogeneous, with temporal and spatial variation, especially in relation to the different vegetation types of this biome. Biomass estimates for this region vary significantly depending on the approach applied and the data set used for modelling. In this context, this study aimed to evaluate three different geostatistical techniques for estimating the spatial distribution of aboveground biomass (AGB). The selected techniques were: 1) ordinary least-squares regression (OLS), 2) geographically weighted regression (GWR) and 3) geographically weighted regression kriging (GWR-K). These techniques were applied to the same field dataset, using the same environmental variables derived from cartographic information and high-resolution remote sensing data (RapidEye). The study was carried out in the Amazon rainforest of Sucumbíos, Ecuador. The results showed that GWR-K, a hybrid technique, provided statistically satisfactory estimates with the lowest prediction error compared with the other two techniques. Furthermore, we observed that 75% of the AGB was explained by the combination of remote sensing data and environmental variables, with forest type being the most important variable for estimating AGB. It should be noted that while the use of high-resolution images significantly improves the estimation of the spatial distribution of AGB, processing this information requires high computational capacity.
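
To make the GWR idea concrete, here is a compact numpy sketch (synthetic data, fixed Gaussian bandwidth): a separate weighted least-squares fit is computed at each query location, with weights that decay with distance. GWR-K would additionally krige the residuals of such local fits.

```python
import numpy as np

def gwr_fit_predict(coords, X, y, q_coord, q_x, bandwidth=2.0):
    """Local weighted least squares at one query location (Gaussian kernel GWR)."""
    X1 = np.column_stack([np.ones(len(X)), X])        # add intercept
    d = np.linalg.norm(coords - np.asarray(q_coord), axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)           # spatial weights
    XtW = X1.T * w                                    # equivalent to X1.T @ diag(w)
    beta = np.linalg.solve(XtW @ X1, XtW @ y)         # local coefficients
    return np.r_[1.0, q_x] @ beta                     # local prediction

# Synthetic example: AGB as a function of one remote-sensing covariate plus a spatial trend
rng = np.random.default_rng(2)
coords = rng.uniform(0, 10, size=(200, 2))
ndvi = rng.uniform(0.2, 0.9, 200)
agb = 50 + 120 * ndvi + 5 * coords[:, 0] + rng.normal(0, 10, 200)

print(gwr_fit_predict(coords, ndvi[:, None], agb, q_coord=[5, 5], q_x=[0.6]))
```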

Relevance:

30.00%

Publisher:

Abstract:

Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Since conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. Monte Carlo results show that the estimator performs well in comparison to other estimators that have been proposed for estimation of general DLV models.
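
A toy illustration of the idea, not the paper's estimator: conditional moments at a trial parameter value are read off a long simulation with a Nadaraya-Watson kernel smoother, and the parameter is chosen to make the smoothed moments match the data. The model, bandwidth and GMM criterion below are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def simulate(theta, n, seed):
    """Toy dynamic latent-variable model: y_t = theta * h_t + noise, with h_t an AR(1) latent state."""
    rng = np.random.default_rng(seed)
    h = np.zeros(n)
    for t in range(1, n):
        h[t] = 0.8 * h[t - 1] + rng.normal()
    y = theta * h + rng.normal(size=n)
    return y[:-1], y[1:]          # condition on lagged y (observable), predict current y

def nw_smooth(x_sim, y_sim, x_eval, bw=0.5):
    """Nadaraya-Watson kernel estimate of E[y | x] at the evaluation points."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_sim[None, :]) / bw) ** 2)
    return (w * y_sim).sum(axis=1) / w.sum(axis=1)

x_data, y_data = simulate(theta=1.5, n=500, seed=1)        # "observed" data, true theta = 1.5

def objective(theta):
    x_sim, y_sim = simulate(theta, n=5_000, seed=2)        # long simulation, common random numbers
    m = nw_smooth(x_sim, y_sim, x_data)                    # smoothed conditional moment E[y | lagged y]
    g = y_data - m                                         # moment condition residuals
    return np.mean(g) ** 2 + np.mean(g * x_data) ** 2      # simple two-moment GMM-style criterion

print(minimize_scalar(objective, bounds=(0.1, 3.0), method="bounded").x)
```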

Relevance:

30.00%

Publisher:

Abstract:

The role of land cover change as a significant component of global change has become increasingly recognized in recent decades. Large databases measuring land cover change, and the data that can potentially be used to explain the observed changes, are also becoming more commonly available. When developing statistical models to investigate observed changes, it is important to be aware that the chosen sampling strategy and modelling techniques can influence the results. We present a comparison of three sampling strategies and two forms of grouped logistic regression model (multinomial and ordinal) in the investigation of patterns of successional change after agricultural land abandonment in Switzerland. The results indicated that both ordinal and nominal transitional change occurs in the landscape and that the use of different sampling regimes and modelling techniques as investigative tools yields different results. Synthesis and applications. Our multimodel inference successfully identified a set of consistently selected indicators of land cover change, which can be used to predict further change, including annual average temperature, the number of already overgrown neighbouring areas of land and the distance to historically destructive avalanche sites. This allows for more reliable decision making and planning with respect to landscape management. Although both model approaches gave similar results, ordinal regression yielded more parsimonious models that identified the important predictors of land cover change more efficiently. Thus, this approach is preferable where the land cover change pattern can be interpreted as an ordinal process; otherwise, multinomial logistic regression is a viable alternative.
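
A brief sketch contrasting the two grouped logistic formulations on a synthetic three-category outcome (variable names such as "overgrown_neighbours" are invented for illustration): scikit-learn's multinomial logistic regression ignores category order, while statsmodels' OrderedModel fits a proportional-odds ordinal model.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(4)
n = 500
# Hypothetical predictors of successional stage after land abandonment
X = pd.DataFrame({"temperature": rng.normal(8, 2, n),
                  "overgrown_neighbours": rng.integers(0, 9, n)})
latent = 0.5 * X["temperature"] + 0.3 * X["overgrown_neighbours"] + rng.logistic(size=n)
stage = pd.cut(latent, bins=[-np.inf, 4, 7, np.inf], labels=[0, 1, 2]).astype(int)  # ordered outcome

# Multinomial logistic regression (treats the three stages as unordered)
multinom = LogisticRegression(max_iter=1000).fit(X, stage)
print(multinom.coef_)

# Ordinal (proportional odds) logistic regression
ordinal = OrderedModel(np.asarray(stage), X, distr="logit").fit(method="bfgs", disp=False)
print(ordinal.params)
```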

Relevance:

30.00%

Publisher:

Abstract:

Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. It is shown that as the number of simulations diverges, the estimator is consistent and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of dynamic latent variable (DLV) models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an analysis of motor vehicle insurance claims relating to vehicle damage and to associated medical expenses. We use univariate severity distributions estimated with parametric and non-parametric methods, implemented using the statistical package R. The parametric analysis is limited to the estimation of normal and lognormal distributions for each of the two claim types. The nonparametric analysis involves kernel density estimation. We illustrate the benefits of applying transformations to the data prior to employing kernel-based methods, using a log-transformation and an optimal transformation, from a class of transformations, that produces symmetry in the data. The central aim of this paper is to provide educators with material that can be used in the classroom to teach statistical estimation methods, goodness-of-fit analysis and, importantly, statistical computing in the context of insurance and risk management. To this end, the Appendix of this paper includes all the R code used in the analysis so that readers, both students and educators, can fully explore the techniques described.
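
A short Python counterpart of the two approaches (the paper itself supplies R code): a maximum-likelihood lognormal fit and a kernel density estimate computed on log-transformed claims and mapped back to the original scale by a change of variables. The claim amounts below are simulated, not the paper's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
claims = rng.lognormal(mean=7.0, sigma=1.2, size=1_000)     # hypothetical claim severities

# Parametric: maximum-likelihood lognormal fit (location fixed at zero)
shape, loc, scale = stats.lognorm.fit(claims, floc=0)
print(f"lognormal fit: sigma={shape:.2f}, mu={np.log(scale):.2f}")

# Nonparametric: kernel density estimation on log-transformed claims,
# then change of variables back to the original scale: f(x) = g(log x) / x
kde = stats.gaussian_kde(np.log(claims))
x = np.linspace(claims.min(), claims.max(), 5)
print(kde(np.log(x)) / x)                                   # density on the original claim scale
```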

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND It is not clear to what extent educational programs aimed at promoting diabetes self-management in ethnic minority groups are effective. The aim of this work was to systematically review the effectiveness of educational programs to promote self-management in racial/ethnic minority groups with type 2 diabetes, and to identify program characteristics associated with greater success. METHODS We undertook a systematic literature review. Specific searches were designed and implemented for Medline, EMBASE, CINAHL, ISI Web of Knowledge, Scirus, Current Contents and nine additional sources (from inception to October 2012). We included experimental and quasi-experimental studies assessing the impact of educational programs targeted at racial/ethnic minority groups with type 2 diabetes, restricted to interventions conducted in OECD member countries. Two reviewers independently screened citations. Structured forms were used to extract information on intervention characteristics, effectiveness and cost-effectiveness. When possible, we conducted random-effects meta-analyses using standardized mean differences to obtain aggregate estimates of effect size with 95% confidence intervals. Two reviewers independently extracted all the information and critically appraised the studies. RESULTS We identified thirty-seven studies reporting on thirty-nine educational programs. Most were conducted in the US, with African American or Latino participants. Most programs obtained some benefits over standard care in improving diabetes knowledge, self-management behaviors and clinical outcomes. A meta-analysis of 20 randomized controlled trials (3,094 patients) indicated that the programs produced a reduction in glycated hemoglobin of -0.31% (95% CI -0.48% to -0.14%). Diabetes knowledge and self-management measures were too heterogeneous to pool. Meta-regressions showed larger reductions in glycated hemoglobin for interventions delivered individually and face to face, for those involving peer educators, for those including cognitive reframing techniques, and for those using a smaller number of teaching methods. The long-term effects remain unknown and cost-effectiveness was rarely estimated. CONCLUSIONS Diabetes self-management educational programs targeted at racial/ethnic minority groups can produce a positive effect on diabetes knowledge and on self-management behavior, ultimately improving glycemic control. Future programs should take into account the key characteristics identified in this review.
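
The random-effects pooling described for the glycated-hemoglobin outcome can be sketched with a DerSimonian-Laird estimator; the per-study effects and standard errors below are invented for illustration.

```python
import numpy as np

# Hypothetical per-study mean differences in HbA1c (%) and their standard errors
effects = np.array([-0.5, -0.2, -0.4, -0.1, -0.35])
se = np.array([0.15, 0.20, 0.10, 0.25, 0.18])

w_fixed = 1 / se**2
q = np.sum(w_fixed * (effects - np.average(effects, weights=w_fixed)) ** 2)  # Cochran's Q
df = len(effects) - 1
c = w_fixed.sum() - (w_fixed**2).sum() / w_fixed.sum()
tau2 = max(0.0, (q - df) / c)                               # DerSimonian-Laird between-study variance

w = 1 / (se**2 + tau2)                                      # random-effects weights
pooled = np.average(effects, weights=w)
se_pooled = np.sqrt(1 / w.sum())
print(f"pooled effect {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * se_pooled:.2f} to {pooled + 1.96 * se_pooled:.2f})")
```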

Relevance:

30.00%

Publisher:

Abstract:

In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for this attenuation, regression calibration is commonly used, which requires unbiased reference measurements. Short-term reference measurements for foods that are not consumed daily contain excess zeroes, which pose challenges in the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zero reference measurements with a two-step modelling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modelling (GAM) and empirical logit approaches, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in roughly a threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model, and that the extent of error adjustment is influenced by the number and forms of the covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity and covariate inclusion when specifying the calibration model.
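
An illustrative two-step sketch of the two-part idea on simulated recall data (all variable names and coefficients are assumptions): a logistic model for the probability of any consumption on the recall day and a linear model for the log amount among consumers, combined into a calibrated expected intake.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 2_000
df = pd.DataFrame({"ffq": rng.gamma(2.0, 50, n),            # self-reported intake (error-prone)
                   "age": rng.normal(55, 8, n)})
X = sm.add_constant(df[["ffq", "age"]])

# Simulated 24-hour-recall reference measurement with excess zeroes (episodic consumption)
consumed = rng.random(n) < 1 / (1 + np.exp(-(0.01 * df["ffq"] - 0.5)))
amount = np.where(consumed, np.exp(3 + 0.004 * df["ffq"] + rng.normal(0, 0.5, n)), 0.0)

# Part 1: probability of any consumption on the recall day
part1 = sm.Logit(consumed.astype(float), X).fit(disp=False)

# Part 2: log consumed amount, among consumers only (log scale tames heteroscedasticity)
mask = consumed
part2 = sm.OLS(np.log(amount[mask]), X[mask]).fit()

# Calibrated expected intake = P(consumption) * E[amount | consumption]
p = part1.predict(X)
mu = np.exp(part2.predict(X) + part2.mse_resid / 2)         # lognormal back-transformation
calibrated = p * mu
print(calibrated[:5].round(1))
```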

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND Left ventricular hypertrophy (LVH) is common in kidney transplant (KT) recipients. LVH is associated with a worse outcome, though m-TOR inhibitor therapy may help to reverse this complication. We therefore conducted a longitudinal study to assess morphological and functional echocardiographic changes after conversion from CNI to m-TOR inhibitor drugs in nondiabetic KT patients who had previously received RAS blockers during the follow-up. METHODS We undertook a 1-year nonrandomized controlled study in 30 non-diabetic KT patients who were converted from calcineurin inhibitor (CNI) to m-TOR inhibitor therapy. A control group continued on CNI-based immunosuppressive therapy. Two echocardiograms were performed during the follow-up. RESULTS Nineteen patients were switched to sirolimus (SRL) and 11 to everolimus (EVL). The m-TOR group showed a significant reduction in left ventricular mass index (LVMi) after 1 year (from 62 ± 22 to 55 ± 20 g/m2.7; P=0.003, paired t-test). A higher proportion of patients showing an LVMi reduction was observed in the m-TOR group (53.3 versus 29.3%, P=0.048) at the end of the study. In addition, only 56% of the m-TOR patients had LVH at the end of the study, compared with 77% of the control group (P=0.047). A significant change from baseline in deceleration time in early diastole was observed in the m-TOR group compared with the control group (P=0.019). CONCLUSIONS Switching from CNI to m-TOR inhibitor therapy in non-diabetic KT patients may regress LVH, independently of blood pressure changes and follow-up time. This suggests a direct non-hemodynamic effect of m-TOR inhibitors on cardiac mass.
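
The within-group change in LVMi reported above rests on a paired t-test of baseline against 1-year measurements; a minimal sketch with hypothetical values follows.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(7)
n = 30
lvmi_baseline = rng.normal(62, 22, n)                       # hypothetical LVMi at conversion (g/m2.7)
lvmi_year1 = lvmi_baseline - rng.normal(7, 10, n)           # hypothetical values after one year

t, p = ttest_rel(lvmi_baseline, lvmi_year1)
print(f"paired t-test: t={t:.2f}, p={p:.4f}")
```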

Relevance:

30.00%

Publisher:

Abstract:

We introduce several exact nonparametric tests for finite sample multivariate linear regressions, and compare their powers. This fills an important gap in the literature where the only known nonparametric tests are either asymptotic, or assume one covariate only.

Relevance:

30.00%

Publisher:

Abstract:

Random coefficient regression models have been applied in different fields and they constitute a unifying setup for many statistical problems. The nonparametric study of this model started with Beran and Hall (1992) and it has become a fruitful framework. In this paper we propose and study statistics for testing a basic hypothesis concerning this model: the constancy of coefficients. The asymptotic behavior of the statistics is investigated and bootstrap approximations are used in order to determine the critical values of the test statistics. A simulation study illustrates the performance of the proposals.
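
A schematic illustration of bootstrap critical values under the null of constant coefficients (synthetic data; the dispersion-of-slopes statistic is chosen for illustration and is not the authors' statistic): residuals from the pooled constant-coefficient fit are resampled to build the null distribution.

```python
import numpy as np

rng = np.random.default_rng(8)
n_subj, n_obs = 40, 25
x = rng.uniform(0, 1, (n_subj, n_obs))
slopes = rng.normal(2.0, 0.3, n_subj)                        # random coefficients (alternative holds)
y = slopes[:, None] * x + rng.normal(0, 0.5, (n_subj, n_obs))

def slope_dispersion(x, y):
    """Test statistic: variance of per-subject OLS slopes."""
    b = [np.polyfit(xi, yi, 1)[0] for xi, yi in zip(x, y)]
    return np.var(b)

stat = slope_dispersion(x, y)

# Pooled (constant-coefficient) fit and residual bootstrap under the null
b_pool, a_pool = np.polyfit(x.ravel(), y.ravel(), 1)
resid = y - (a_pool + b_pool * x)
boot = []
for _ in range(500):
    y_null = a_pool + b_pool * x + rng.choice(resid.ravel(), size=y.shape)
    boot.append(slope_dispersion(x, y_null))
print("bootstrap p-value:", np.mean(np.array(boot) >= stat))
```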

Relevance:

30.00%

Publisher:

Abstract:

Free-living energy expenditure (EE) was assessed in 37 young pregnant Gambian women at the 12th (n = 11, 53.5 ± 1.7 kg), 24th (n = 14, 54.7 ± 2.1 kg) and 36th (n = 12, 65.0 ± 2.6 kg) wk of pregnancy and was compared with that of nonpregnant, nonlactating (NPNL) control women (n = 12, 50.3 ± 1.6 kg). Two methods were used to assess EE: 1) the heart rate (HR) method, using individual regression lines (HR vs EE) established at different activity levels in a respiration chamber, and 2) the doubly labeled water (2H2(18)O) method, in a subgroup of 25 pregnant and 7 control women. With the HR method, EE during the agricultural rainy season was 2,408 ± 87, 2,293 ± 122 and 2,782 ± 130 kcal/day at 12, 24 and 36 wk of gestation, respectively, values not significantly different from the control group (2,502 ± 133 kcal/day). These findings were confirmed by the 2H2(18)O measurements, which failed to show any effect of pregnancy on EE. Expressed per unit body weight, free-living EE was lower (P less than 0.01 with the 2H2(18)O method) at 36 wk of gestation than in the NPNL group. It is concluded that, in these Gambian women, energy-sparing mechanisms that help meet the additional energy stress of gestation operate during pregnancy (e.g., diminished spontaneous physical activity).
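
A minimal numpy sketch of the heart-rate method as described (all calibration points and the free-living trace are invented): an individual regression of energy expenditure on heart rate from chamber measurements is applied to a day of free-living heart-rate recordings.

```python
import numpy as np

# Hypothetical individual calibration from a respiration chamber:
# heart rate (beats/min) and measured energy expenditure (kcal/min) at several activity levels
hr_calib = np.array([60, 75, 90, 110, 130])
ee_calib = np.array([1.0, 1.3, 2.0, 3.4, 5.0])

slope, intercept = np.polyfit(hr_calib, ee_calib, 1)         # individual HR-EE regression line

# Free-living minute-by-minute heart rate for one day (hypothetical)
rng = np.random.default_rng(9)
hr_day = np.clip(rng.normal(80, 15, 24 * 60), 55, 160)
ee_day = np.clip(intercept + slope * hr_day, 0.9, None)      # floor near resting EE
print(f"estimated free-living EE: {ee_day.sum():.0f} kcal/day")
```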