837 results for ROBUST ESTIMATES
Abstract:
Report produced by the Department of Agriculture and Land Stewardship, Climatology Bureau. The Iowa Crops and Weather report is released by the USDA National Agricultural Statistics Service.
Abstract:
OBJECTIVE: To assess the suitability of a hot-wire anemometer infant monitoring system (Florian, Acutronic Medical Systems AG, Hirzel, Switzerland) for measuring flow and tidal volume (Vt) proximal to the endotracheal tube during high-frequency oscillatory ventilation. DESIGN: In vitro model study. SETTING: Respiratory research laboratory. SUBJECT: In vitro lung model simulating moderate to severe respiratory distress. INTERVENTION: The lung model was ventilated with a SensorMedics 3100A ventilator. Vt was recorded from the monitor display (Vt-disp) and compared with the gold standard (Vt-adiab), which was calculated using the adiabatic gas equation from pressure changes inside the model. MEASUREMENTS AND MAIN RESULTS: Ranges of Vt (1-10 mL), frequency (5-15 Hz), pressure amplitude (10-90 cm H2O), inspiratory time (30% to 50%), and Fio2 (0.21-1.0) were used. Accuracy was determined using modified Bland-Altman plots (95% limits of agreement). An exponential decrease in Vt was observed with increasing oscillatory frequency. Mean ΔVt-disp was 0.6 mL (limits of agreement, -1.0 to 2.1) with a linear frequency dependence. Mean ΔVt-disp was -0.2 mL (limits of agreement, -0.5 to 0.1) with increasing pressure amplitude and -0.2 mL (limits of agreement, -0.3 to -0.1) with increasing inspiratory time. Humidity and heating did not affect error, whereas increasing Fio2 from 0.21 to 1.0 increased mean error by 6.3% (±2.5%). CONCLUSIONS: The Florian infant hot-wire flowmeter and monitoring system provides reliable measurements of Vt at the airway opening during high-frequency oscillatory ventilation when employed at frequencies of 8-13 Hz. Bedside application could improve monitoring of patients receiving high-frequency oscillatory ventilation, foster a better understanding of the physiologic consequences of different high-frequency oscillatory ventilation strategies, and thereby optimize treatment.
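The modified Bland-Altman analysis described above reduces to a mean difference (bias) and limits of agreement at bias ± 1.96 SD. A minimal sketch in Python; the paired Vt readings below are hypothetical, not the study's data:

```python
import statistics

def bland_altman_limits(measured, reference):
    """Mean difference (bias) and 95% limits of agreement (bias +/- 1.96 SD)."""
    diffs = [m - r for m, r in zip(measured, reference)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical paired Vt readings (mL): monitor display vs. adiabatic standard
vt_disp  = [4.2, 5.1, 6.0, 7.2, 8.1, 9.3]
vt_adiab = [4.0, 5.0, 5.8, 7.0, 8.0, 9.0]
bias, (loa_low, loa_high) = bland_altman_limits(vt_disp, vt_adiab)
```

With these invented readings the display overreads by roughly 0.18 mL on average; the study's reported limits were computed the same way from the model data.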
Batch effect confounding leads to strong bias in performance estimates obtained by cross-validation.
Abstract:
BACKGROUND: With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences ("batch effects") as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. FOCUS: The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared to previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. DATA: We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., 'control') or group 2 (e.g., 'treated'). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. METHODS: We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, are performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data.
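The bias the authors describe can be illustrated with a toy simulation: when batch is fully confounded with group and the only signal is a batch shift, cross-validation within the batched data looks optimistic, while accuracy on batch-free independent data collapses toward chance. The nearest-centroid classifier, single feature, and effect sizes below are illustrative choices, not the paper's setup:

```python
import random
import statistics

random.seed(0)

def simulate(n, confounded):
    """Toy expression-like data: one feature carrying only a batch effect.
    Labels alternate; when confounded, batch membership equals the label."""
    data = []
    for i in range(n):
        y = i % 2
        batch = y if confounded else random.randint(0, 1)
        x = 2.0 * batch + random.gauss(0.0, 1.0)  # no true class signal
        data.append((x, y))
    return data

def nearest_centroid_acc(train, test):
    """Fit a 1-D nearest-centroid classifier and score it on test data."""
    c0 = statistics.mean(x for x, y in train if y == 0)
    c1 = statistics.mean(x for x, y in train if y == 1)
    hits = sum((abs(x - c1) < abs(x - c0)) == (y == 1) for x, y in test)
    return hits / len(test)

def cv_acc(data, folds=5):
    """Plain k-fold cross-validation accuracy within one data set."""
    parts = [data[i::folds] for i in range(folds)]
    scores = []
    for i in range(folds):
        train = [s for j, p in enumerate(parts) if j != i for s in p]
        scores.append(nearest_centroid_acc(train, parts[i]))
    return statistics.mean(scores)

batched = simulate(200, confounded=True)       # batch fully confounded with group
independent = simulate(200, confounded=False)  # batch unrelated to group
```

Here `cv_acc(batched)` is well above chance because the classifier exploits the batch shift, while `nearest_centroid_acc(batched, independent)` sits near 0.5.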
Abstract:
In the past 20 years the theory of robust estimation has become an important topic of mathematical statistics. We discuss here some basic concepts of this theory with the help of simple examples. Furthermore we describe a subroutine library for the application of robust statistical procedures, which was developed with the support of the Swiss National Science Foundation.
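One of the basic robust procedures such a subroutine library would contain is the Huber M-estimate of location. A minimal sketch via iteratively reweighted least squares, with the scale fixed at the normal-consistent MAD for simplicity (the tuning constant k = 1.345 and the data vector are illustrative):

```python
import statistics

def huber_location(xs, k=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted least squares;
    scale is held fixed at the normal-consistent MAD for simplicity."""
    mu = statistics.median(xs)
    mad = statistics.median(abs(x - mu) for x in xs)
    s = 1.4826 * mad if mad > 0 else 1.0  # guard against zero scale
    for _ in range(max_iter):
        # observations within k*s keep full weight, outliers are down-weighted
        w = [1.0 if abs(x - mu) <= k * s else k * s / abs(x - mu) for x in xs]
        new_mu = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
        if abs(new_mu - mu) < tol:
            break
        mu = new_mu
    return mu

data = [9.8, 10.1, 10.0, 9.9, 10.2, 55.0]  # one gross outlier
```

The sample mean of `data` is pulled above 17 by the outlier, while the Huber estimate stays near 10, which is the behaviour robust theory is designed to guarantee.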
Abstract:
PURPOSE: To investigate the ability of inversion recovery ON-resonant water suppression (IRON) in conjunction with P904 (superparamagnetic nanoparticles consisting of a maghemite core coated with a low-molecular-weight amino-alcohol derivative of glucose) to perform steady-state equilibrium-phase MR angiography (MRA) over a wide dose range. MATERIALS AND METHODS: Experiments were approved by the institutional animal care committee. Rabbits (n = 12) were imaged at baseline and serially after the administration of 10 incremental dosages of 0.57-5.7 mgFe/kg P904. Conventional T1-weighted and IRON MRA were obtained on a clinical 1.5 Tesla (T) scanner to image the thoracic and abdominal aorta and peripheral vessels. Contrast-to-noise ratios (CNR) and vessel sharpness were quantified. RESULTS: Using IRON MRA, CNR and vessel sharpness progressively increased with incremental dosages of the contrast agent P904, exhibiting consistently higher contrast values than T1-weighted MRA over a very wide range of contrast agent doses (CNR of 18.8 ± 5.6 for IRON versus 11.1 ± 2.8 for T1-weighted MRA at 1.71 mgFe/kg, P = 0.02, and 19.8 ± 5.9 for IRON versus -0.8 ± 1.4 for T1-weighted MRA at 3.99 mgFe/kg, P = 0.0002). Similar results were obtained for vessel sharpness in peripheral vessels (46.76 ± 6.48% for IRON versus 33.20 ± 3.53% for T1-weighted MRA at 1.71 mgFe/kg, P = 0.002, and 48.66 ± 5.50% for IRON versus 19.00 ± 7.41% for T1-weighted MRA at 3.99 mgFe/kg, P = 0.003). CONCLUSION: Our study suggests that quantitative CNR and vessel sharpness after the injection of P904 are consistently higher for IRON MRA than for conventional T1-weighted MRA. These findings apply over a wide range of contrast agent dosages.
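CNR quantification of the kind reported above is commonly computed from region-of-interest statistics as (mean vessel signal − mean background signal) divided by the standard deviation of a noise-only region; the exact ROI protocol of this study is not specified here, so the sketch below is a generic illustration with invented pixel values:

```python
import statistics

def cnr(vessel_roi, background_roi, noise_roi):
    """Contrast-to-noise ratio from region-of-interest pixel samples:
    (mean vessel - mean background) / SD of a noise-only region."""
    return ((statistics.mean(vessel_roi) - statistics.mean(background_roi))
            / statistics.stdev(noise_roi))

# illustrative pixel intensities, not values from the study
vessel, background, noise = [120, 118, 122], [60, 62, 58], [0, 2, -2, 1, -1]
```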
Abstract:
Weather radar observations are currently the most reliable method for remote sensing of precipitation. However, a number of factors affect the quality of radar observations and may seriously limit automated quantitative applications of radar precipitation estimates, such as those required in Numerical Weather Prediction (NWP) data assimilation or in hydrological models. In this paper, a technique to correct two different problems typically present in radar data is presented and evaluated. The aspects dealt with are non-precipitating echoes - caused either by permanent ground clutter or by anomalous propagation of the radar beam (anaprop echoes) - and topographical beam blockage. The correction technique is based on the computation of realistic beam propagation trajectories from recent radiosonde observations instead of assuming standard radio propagation conditions. The correction consists of three different steps: 1) calculation of a Dynamic Elevation Map, which provides the minimum clutter-free antenna elevation for each pixel within the radar coverage; 2) correction for residual anaprop, checking the vertical reflectivity gradients within the radar volume; and 3) topographical beam blockage estimation and correction using a geometric optics approach. The technique is evaluated with four case studies in the region of the Po Valley (N Italy) using a C-band Doppler radar and a network of raingauges providing hourly precipitation measurements. The case studies cover different seasons, different radio propagation conditions, and both stratiform and convective precipitation events. After applying the proposed correction, a comparison of the radar precipitation estimates with raingauges indicates a general reduction in both the root mean squared error and the fractional error variance, indicating the efficiency and robustness of the procedure. Moreover, the technique is not computationally expensive, so it seems well suited to implementation in an operational environment.
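The two verification scores used above can be sketched against gauge totals as follows; the fractional error variance is taken here as the radar-gauge error variance normalised by the gauge variance, which may differ in detail from the paper's definition, and the hourly totals are invented:

```python
import statistics

def rmse(radar, gauge):
    """Root mean squared error of radar estimates against raingauge totals."""
    return (sum((r - g) ** 2 for r, g in zip(radar, gauge)) / len(gauge)) ** 0.5

def fractional_error_variance(radar, gauge):
    """Variance of the radar-gauge error normalised by the gauge variance
    (assumed formulation; the paper's exact definition may differ)."""
    errors = [r - g for r, g in zip(radar, gauge)]
    return statistics.pvariance(errors) / statistics.pvariance(gauge)

# invented hourly totals (mm): gauges, raw radar, radar after correction
gauge     = [1.0, 2.0, 4.0, 0.5, 3.0]
radar_raw = [0.5, 1.5, 5.0, 0.2, 2.0]
radar_cor = [0.9, 1.8, 4.3, 0.5, 2.8]
```

With these numbers both scores drop after correction, which is the pattern the evaluation reports.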
Abstract:
Summary: Genetic parameters of racing performance measures describing the breeding traits of trotters
Abstract:
BACKGROUND: Transient balanced steady-state free-precession (bSSFP) has shown substantial promise for noninvasive assessment of coronary arteries but its utilization at 3.0 T and above has been hampered by susceptibility to field inhomogeneities that degrade image quality. The purpose of this work was to refine, implement, and test a robust, practical single-breathhold bSSFP coronary MRA sequence at 3.0 T and to test the reproducibility of the technique. METHODS: A 3D, volume-targeted, high-resolution bSSFP sequence was implemented. Localized image-based shimming was performed to minimize inhomogeneities of both the static magnetic field and the radio frequency excitation field. Fifteen healthy volunteers and three patients with coronary artery disease underwent examination with the bSSFP sequence (scan time = 20.5 ± 2.0 seconds), and acquisitions were repeated in nine subjects. The images were quantitatively analyzed using a semi-automated software tool, and the repeatability and reproducibility of measurements were determined using regression analysis and intra-class correlation coefficient (ICC), in a blinded manner. RESULTS: The 3D bSSFP sequence provided uniform, high-quality depiction of coronary arteries (n = 20). The average visible vessel length of 100.5 ± 6.3 mm and sharpness of 55 ± 2% compared favorably with earlier reported navigator-gated bSSFP and gradient echo sequences at 3.0 T. Length measurements demonstrated a highly statistically significant degree of inter-observer (r = 0.994, ICC = 0.993), intra-observer (r = 0.894, ICC = 0.896), and inter-scan concordance (r = 0.980, ICC = 0.974). Furthermore, ICC values demonstrated excellent intra-observer, inter-observer, and inter-scan agreement for vessel diameter measurements (ICC = 0.987, 0.976, and 0.961, respectively), and vessel sharpness values (ICC = 0.989, 0.938, and 0.904, respectively). 
CONCLUSIONS: The 3D bSSFP acquisition, using a state-of-the-art MR scanner equipped with recently available technologies such as multi-transmit, 32-channel cardiac coil, and localized B0 and B1+ shimming, allows accelerated and reproducible multi-segment assessment of the major coronary arteries at 3.0 T in a single breathhold. This rapid sequence may be especially useful for functional imaging of the coronaries where the acquisition time is limited by the stress duration and in cases where low navigator-gating efficiency prohibits acquisition of a free breathing scan in a reasonable time period.
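The agreement statistics above can be illustrated with the one-way random-effects ICC(1,1), computed from the usual ANOVA mean squares; the study likely used a two-way model, which differs slightly, and the length table below is hypothetical:

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for a balanced table
    ratings[subject][rater], from the one-way ANOVA mean squares."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    means = [sum(row) / k for row in ratings]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# hypothetical vessel-length measurements (mm) by two observers
lengths = [[100, 101], [95, 96], [110, 108], [102, 103]]
```

For these invented measurements the ICC exceeds 0.95, the same "excellent agreement" range reported in the study.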
Abstract:
To test whether quantitative traits are under directional or homogenizing selection, it is common practice to compare population differentiation estimates at molecular markers (F(ST)) and quantitative traits (Q(ST)). If the trait is neutral and its genetic determination is additive, then theory predicts that Q(ST) = F(ST), while Q(ST) > F(ST) is predicted under directional selection for different local optima, and Q(ST) < F(ST) is predicted under homogenizing selection. However, nonadditive effects can alter these predictions. Here, we investigate the influence of dominance on the relation between Q(ST) and F(ST) for neutral traits. Using analytical results and computer simulations, we show that dominance generally deflates Q(ST) relative to F(ST). Under inbreeding, the effect of dominance vanishes, and we show that for selfing species, a better estimate of Q(ST) is obtained from selfed families than from half-sib families. We also compare several sampling designs and find that it is always best to sample many populations (>20) with few families (five) rather than few populations with many families. Provided that estimates of Q(ST) are derived from individuals originating from many populations, we conclude that the pattern Q(ST) > F(ST), and hence the inference of directional selection for different local optima, is robust to the effect of nonadditive gene action.
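The neutral additive expectation Q(ST) = F(ST) rests on the standard definition Q(ST) = V_b / (V_b + 2 V_w) for an outcrossing population. A minimal sketch; the population means and within-population variance below are illustrative, and in practice both variance components are estimated from an ANOVA rather than taken as known:

```python
import statistics

def qst(pop_means, v_within):
    """Q_ST = V_b / (V_b + 2 * V_w): between-population variance of the trait
    against twice the additive within-population variance (outcrossing case)."""
    v_b = statistics.variance(pop_means)
    return v_b / (v_b + 2 * v_within)

# illustrative trait means for three populations, known within-pop variance
q = qst([10.0, 12.0, 14.0], 2.0)
```

A value of `q` well above a marker-based F(ST) would point toward directional selection for different local optima; well below, toward homogenizing selection.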
Abstract:
ABSTRACT: The objective of this study is to evaluate how young physicians in training perceive the cardiovascular risk of their hypertensive patients, based on clinical guidelines and on their clinical judgment. This is a cross-sectional observational study conducted at the Policlinique Médicale Universitaire de Lausanne (PMU). 200 hypertensive patients were included in the study, along with a control group of 50 non-hypertensive patients presenting at least one cardiovascular risk factor. We compared the 10-year cardiovascular risk calculated by a computer program based on the Framingham equation, as adapted for physicians by the WHO-ISH, with the risk perceived and clinically estimated by the physicians. The results of our study showed that physicians underestimate the 10-year cardiovascular risk of their patients compared with the risk calculated with the Framingham equation. Agreement between the two methods was 39% for hypertensive patients and 30% for the control group of non-hypertensive patients. Underestimation of cardiovascular risk in hypertensive patients was associated with a stabilized systolic blood pressure below 140 mmHg (OR = 2.1 [1.1; 4.1]). In conclusion, the results of this study show that young physicians in training often have an incorrect perception of their patients' cardiovascular risk, with a tendency to underestimate it. However, the calculated risk could also be slightly overestimated when the Framingham equation is applied to the Swiss population. To put systematic risk-factor assessment into practice in primary care, greater emphasis should be placed on teaching cardiovascular risk assessment and on implementing quality-improvement programs.
Abstract:
In this paper we test the hysteresis versus the natural rate hypothesis on the unemployment rates of the new EU members, using unit root tests that account for the presence of level shifts. As a by-product, the analysis proceeds to the estimation of a NAIRU measure from a univariate point of view. The paper also focuses on the precision of these NAIRU estimates, studying the two sources of inaccuracy that derive from the estimation of the break points and of the autoregressive parameters. The results point to the existence of up to four structural breaks in the transition countries' NAIRU that can be associated with institutional changes implementing market-oriented reforms. Moreover, the degree of persistence in unemployment varies dramatically among the individual countries, depending on the stage reached in the transition process.
Abstract:
This paper presents a new regional database on GDP in Spain for the years 1860, 1900, 1914 and 1930. Following Geary and Stark (2002), country level GDP estimates are allocated across Spanish provinces. The results are then compared with previous estimates. Further, this new evidence is used to analyze the evolution of regional inequality and convergence in the long run. According to the distribution dynamics approach suggested by Quah (1993, 1996) persistence appears as a main feature in the regional distribution of output. Therefore, in the long run no evidence of regional convergence in the Spanish economy is found.
Abstract:
Gammadelta T cells are implicated in host defense against microbes and tumors but their mode of function remains largely unresolved. Here, we have investigated the ability of activated human Vgamma9Vdelta2(+) T cells (termed gammadelta T-APCs) to cross-present microbial and tumor antigens to CD8(+) alphabeta T cells. Although this process is thought to be mediated best by DCs, adoptive transfer of ex vivo antigen-loaded, human DCs during immunotherapy of cancer patients has shown limited success. We report that gammadelta T-APCs take up and process soluble proteins and induce proliferation, target cell killing and cytokine production responses in antigen-experienced and naïve CD8(+) alphabeta T cells. Induction of APC functions in Vgamma9Vdelta2(+) T cells was accompanied by the up-regulation of costimulatory and MHC class I molecules. In contrast, the functional predominance of the immunoproteasome was a characteristic of gammadelta T cells irrespective of their state of activation. Gammadelta T-APCs were more efficient in antigen cross-presentation than monocyte-derived DCs, which is in contrast to the strong induction of CD4(+) alphabeta T cell responses by both types of APCs. Our study reveals unexpected properties of human gammadelta T-APCs in the induction of CD8(+) alphabeta T effector cells, and justifies their further exploration in immunotherapy research.
Abstract:
Chlorophyll determination with a portable chlorophyll meter can indicate the period of highest N demand of plants and whether sidedressing is required. In this sense, defining the optimal timing of N application to common bean is fundamental to increasing N use efficiency, increasing yields, and reducing the cost of fertilization. The objectives of this study were to evaluate the efficiency of the N sufficiency index (NSI), calculated from the relative chlorophyll index (RCI) in leaves measured with a portable chlorophyll meter, as an indicator of the timing of N sidedressing fertilization, and to verify which NSI value (90 or 95 %) is the most appropriate to indicate the moment of N fertilization of the common bean cultivar Perola. The experiment was carried out in the rainy and dry growing seasons of the agricultural year 2009/10 on a dystroferric Red Nitosol in Botucatu, São Paulo State, Brazil. The experiment was arranged in a randomized complete block design with five treatments, consisting of N managements (M1: 200 kg ha-1 N (40 kg at sowing + 80 kg 15 days after emergence (DAE) + 80 kg 30 DAE); M2: 100 kg ha-1 N (20 kg at sowing + 40 kg 15 DAE + 40 kg 30 DAE); M3: 20 kg ha-1 N at sowing + 30 kg ha-1 when chlorophyll meter readings indicated NSI < 95 %; M4: 20 kg ha-1 N at sowing + 30 kg ha-1 N when chlorophyll meter readings indicated NSI < 90 %; and M5: control (without N application)) and four replications. The variables RCI, aboveground dry matter, total leaf N concentration, production components, grain yield, relative yield, and N use efficiency were evaluated. The RCI correlated with leaf N concentrations. By monitoring the RCI with the chlorophyll meter, the period of N sidedressing of common bean could be defined, improving N use efficiency and avoiding unnecessary N supply. An NSI of 90 % of the reference area was more efficient at defining the moment of N sidedressing and at increasing N use efficiency.
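The NSI criterion the study evaluates is simply the plot's RCI expressed as a percentage of a well-fertilised reference area, with sidedressing triggered when it falls below the threshold. A minimal sketch (the meter readings are illustrative):

```python
def n_sufficiency_index(rci_plot, rci_reference):
    """NSI (%): relative chlorophyll index of the managed plot as a
    percentage of a well-fertilised reference area."""
    return 100.0 * rci_plot / rci_reference

def needs_sidedressing(rci_plot, rci_reference, threshold=90.0):
    """Trigger the 30 kg ha-1 N application when NSI falls below the
    threshold; 90 % was the better-performing criterion in this study."""
    return n_sufficiency_index(rci_plot, rci_reference) < threshold

# illustrative readings: plot RCI 40 vs. reference 50 gives NSI = 80 %
apply_now = needs_sidedressing(40.0, 50.0)
```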