979 results for interval approach


Relevance: 30.00%

Abstract:

An appreciation of the quantity of streamflow derived from the main hydrological pathways involved in transporting diffuse contaminants is critical when addressing a wide range of water resource management issues. In order to assess hydrological pathway contributions to streams, it is necessary to provide feasible upper and lower bounds for flows in each pathway. An important first step in this process is to provide reliable estimates of the slower responding groundwater pathways and subsequently the quicker overland and interflow pathways. This paper investigates the effectiveness of a multi-faceted approach that applies different hydrograph separation techniques, supplemented by lumped hydrological modelling, to calculate the Baseflow Index (BFI) and develop an integrated approach to hydrograph separation. A semi-distributed, lumped and deterministic rainfall-runoff model known as NAM has been applied to ten catchments (ranging from 5 to 699 km²). While this modelling approach is useful as a validation method, NAM itself is also an important investigative tool. The separation techniques produce a wide spread of BFI values: for one catchment the difference reaches 0.741 when the less reliable fixed interval, sliding interval and local minimum turning point methods are included, and falls to 0.167 when these methods are omitted. The Boughton and Eckhardt algorithms, while quite subjective in their use, provide quick and easily implemented approaches for obtaining physically realistic hydrograph separations. Although the different separation techniques give varying BFI values for each catchment, a recharge coefficient approach developed in Ireland, applied in conjunction with the Master Recession Curve Tabulation method, predicts estimates in agreement with those obtained using the NAM model, and these estimates are also consistent with the study catchments' geology. These two separation methods, in conjunction with the NAM model, were therefore selected to form an integrated approach to assessing BFI in catchments.
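
For readers unfamiliar with the filter family cited above, the following is a minimal Python sketch of the Eckhardt recursive digital baseflow filter and the BFI computed from its output; the parameter values and the streamflow series are illustrative assumptions, not values from the study.

    # Minimal sketch of the Eckhardt recursive digital baseflow filter and
    # the resulting Baseflow Index (BFI). The parameters a and bfi_max and
    # the streamflow series are illustrative assumptions only.

    def eckhardt_baseflow(q, a=0.98, bfi_max=0.80):
        """Separate baseflow from a streamflow series q (list of floats).

        a       -- recession constant (e.g. from master recession curve analysis)
        bfi_max -- assumed maximum attainable BFI for the catchment type
        """
        b = [q[0] * bfi_max]  # initialise baseflow as a fraction of first flow
        for k in range(1, len(q)):
            bk = ((1 - bfi_max) * a * b[-1] + (1 - a) * bfi_max * q[k]) / (1 - a * bfi_max)
            b.append(min(bk, q[k]))  # baseflow cannot exceed total flow
        return b

    def baseflow_index(q, b):
        """BFI = total baseflow volume / total streamflow volume."""
        return sum(b) / sum(q)

    # Hypothetical daily flows (m3/s) for illustration
    q = [5.0, 4.8, 12.0, 25.0, 18.0, 11.0, 8.0, 6.5, 5.8, 5.2]
    b = eckhardt_baseflow(q)
    print(f"BFI = {baseflow_index(q, b):.3f}")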

Relevance: 30.00%

Abstract:

Introduction: Individuals carrying pathogenic mutations in the BRCA1 and BRCA2 genes have a high lifetime risk of breast cancer. BRCA1 and BRCA2 are involved in the repair of DNA double-strand breaks, alterations that can be caused by exposure to reactive oxygen species, for which mitochondria are a main source. Mitochondrial genome variations affect electron transport chain efficiency and reactive oxygen species production. Individuals with different mitochondrial haplogroups differ in their metabolism and sensitivity to oxidative stress. Variability in mitochondrial genetic background can therefore alter reactive oxygen species production and, in turn, cancer risk. In the present study, we tested the hypothesis that mitochondrial haplogroups modify breast cancer risk in BRCA1/2 mutation carriers.

Methods: We genotyped 22,214 (11,421 affected, 10,793 unaffected) mutation carriers belonging to the Consortium of Investigators of Modifiers of BRCA1/2 for 129 mitochondrial polymorphisms using the iCOGS array. Haplogroup inference and association detection were performed using a phylogenetic approach. ALTree was applied to explore the reference mitochondrial evolutionary tree and detect subclades enriched in affected or unaffected individuals.

Results: We discovered that subclade T1a1 was depleted in affected BRCA2 mutation carriers compared with the rest of clade T (hazard ratio (HR) = 0.55; 95% confidence interval (CI), 0.34 to 0.88; P = 0.01). Compared with the most frequent haplogroup in the general population (that is, the H and T clades), the T1a1 haplogroup has an HR of 0.62 (95% CI, 0.40 to 0.95; P = 0.03). We also identified three potential susceptibility loci, including G13708A/rs28359178, which has previously shown an inverse association with familial breast cancer risk.

Conclusions: This study illustrates how original approaches such as the phylogeny-based method we used can empower classical molecular epidemiological studies aimed at identifying association or risk modification effects.

Relevance: 30.00%

Abstract:

Although the relationship between serum uric acid (SUA) and adiposity is well established, the direction of the causality is still unclear in the presence of conflicting evidence. We used a bidirectional Mendelian randomization approach to explore the nature and direction of causality between SUA and adiposity in a population-based study of Caucasians aged 35 to 75 years. We used, as instrumental variables, rs6855911 within the SUA gene SLC2A9 in one direction, and combinations of SNPs within the adiposity genes FTO, MC4R and TMEM18 in the other direction. Adiposity markers included weight, body mass index, waist circumference and fat mass. We applied a two-stage least squares regression: a regression of SUA/adiposity markers on our instruments in the first stage and a regression of the response of interest on the fitted values from the first-stage regression in the second stage. SUA explained by the SLC2A9 instrument was not associated with fat mass (regression coefficient [95% confidence interval]: 0.05 [-0.10, 0.19] for fat mass), contrasting with the ordinary least squares estimate (0.37 [0.34, 0.40]). By contrast, fat mass explained by genetic variants of the FTO, MC4R and TMEM18 genes was positively and significantly associated with SUA (0.31 [0.01, 0.62]), similar to the ordinary least squares estimate (0.27 [0.25, 0.29]). Results were similar for the other adiposity markers. Using a bidirectional Mendelian randomization approach in adult Caucasians, our findings suggest that elevated SUA is a consequence rather than a cause of adiposity.
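
The two-stage least squares procedure described here is straightforward to sketch. The following illustration uses synthetic data in which SUA has no causal effect on fat mass but both share a confounder; the variable names and effect sizes are assumptions for illustration only.

    # Minimal sketch of the two-stage least squares (2SLS) procedure on
    # synthetic data. Variable names (snp, sua, fat_mass) and the simulated
    # effect sizes are illustrative assumptions, not study values.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    snp = rng.binomial(2, 0.3, n).astype(float)   # instrument (allele dosage)
    u = rng.normal(size=n)                        # unobserved confounder
    sua = 0.5 * snp + u + rng.normal(size=n)      # exposure, partly explained by instrument
    fat_mass = 0.0 * sua + u + rng.normal(size=n) # outcome: no causal effect of SUA here

    def ols(y, x):
        X = np.column_stack([np.ones(len(x)), x])
        return np.linalg.lstsq(X, y, rcond=None)[0]

    # Stage 1: regress the exposure on the instrument; keep fitted values.
    b1 = ols(sua, snp)
    sua_hat = b1[0] + b1[1] * snp

    # Stage 2: regress the outcome on the fitted exposure.
    b2 = ols(fat_mass, sua_hat)
    print(f"2SLS estimate: {b2[1]:.3f} (true causal effect is 0)")
    print(f"Naive OLS estimate: {ols(fat_mass, sua)[1]:.3f} (confounded)")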

Relevance: 30.00%

Abstract:

Compositional data, also called multiplicative ipsative data, are common in survey research instruments in areas such as time use, budget expenditure and social networks. Compositional data are usually expressed as proportions of a total, whose sum can only be 1. Owing to their constrained nature, statistical analysis in general, and estimation of measurement quality with a confirmatory factor analysis model for multitrait-multimethod (MTMM) designs in particular, are challenging tasks. Compositional data are highly non-normal, as they range within the 0-1 interval. One component can only increase if some other(s) decrease, which results in spurious negative correlations among components that cannot be accounted for by the MTMM model parameters. In this article we show how researchers can use the correlated uniqueness model for MTMM designs in order to evaluate the measurement quality of compositional indicators. We suggest using the additive log ratio transformation of the data, discuss several approaches to deal with zero components and explain how the interpretation of MTMM designs differs from the application to standard unconstrained data. We show an illustration of the method on data of social network composition expressed in percentages of partner, family, friends and other members, in which we conclude that the face-to-face collection mode is generally superior to the telephone mode, although primacy effects are higher in the face-to-face mode. Compositions of strong ties (such as partner) are measured with higher quality than those of weaker ties (such as other network members).
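
A minimal sketch of the additive log ratio transformation recommended above, including a simple zero-replacement step (one of several possible approaches to zero components); the composition values are hypothetical.

    # Minimal sketch of the additive log-ratio (alr) transformation for
    # compositional data, with simple zero replacement. The composition
    # values and the replacement constant are illustrative.
    import numpy as np

    def alr(composition, eps=1e-3):
        """Map a D-part composition (summing to 1) to D-1 unconstrained values.

        Zeros are replaced by eps and the composition is re-closed (rescaled
        to sum to 1) before taking log-ratios against the last component.
        """
        x = np.asarray(composition, dtype=float)
        x = np.where(x == 0, eps, x)
        x = x / x.sum()                      # re-close after zero replacement
        return np.log(x[:-1] / x[-1])        # log-ratios w.r.t. last part

    # Hypothetical network composition: partner, family, friends, other
    share = [0.25, 0.40, 0.30, 0.05]
    print(alr(share))   # three unconstrained coordinates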

Relevance: 30.00%

Abstract:

This study presents a new simple approach for combining empirical with raw (i.e., not bias corrected) coupled model ensemble forecasts in order to make more skillful interval forecasts of ENSO. A Bayesian normal model has been used to combine empirical and raw coupled model December SST Niño-3.4 index forecasts started at the end of the preceding July (5-month lead time). The empirical forecasts were obtained by linear regression between December and the preceding July Niño-3.4 index values over the period 1950–2001. Coupled model ensemble forecasts for the period 1987–99 were provided by ECMWF, as part of the Development of a European Multimodel Ensemble System for Seasonal to Interannual Prediction (DEMETER) project. Taken alone, the empirical and raw coupled model ensemble forecasts have similar mean absolute error skill scores, relative to climatological forecasts, of around 50% over the period 1987–99. The combined forecast gives an increased skill score of 74% and provides a well-calibrated and reliable estimate of forecast uncertainty.
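
The combination step can be sketched as a conjugate normal update in which the empirical regression forecast serves as the prior and the ensemble supplies the likelihood; this is an assumed simplification of the paper's Bayesian normal model, and all numbers below are illustrative.

    # Minimal sketch of combining an empirical forecast with a model ensemble
    # under a normal Bayesian model: the empirical forecast acts as the prior,
    # the ensemble mean as the observation, and the posterior mean is a
    # precision-weighted average. All numbers are illustrative assumptions.
    import statistics

    def combine(prior_mean, prior_var, ens_members):
        ens_mean = statistics.mean(ens_members)
        # variance of the ensemble mean as a crude likelihood variance
        ens_var = statistics.variance(ens_members) / len(ens_members)
        w_prior, w_ens = 1.0 / prior_var, 1.0 / ens_var
        post_mean = (w_prior * prior_mean + w_ens * ens_mean) / (w_prior + w_ens)
        post_var = 1.0 / (w_prior + w_ens)
        return post_mean, post_var

    # Hypothetical December Nino-3.4 anomaly forecasts (degrees C)
    mean, var = combine(prior_mean=0.8, prior_var=0.5,
                        ens_members=[1.4, 1.1, 1.7, 1.2, 1.5])
    print(f"combined forecast: {mean:.2f} +/- {1.96 * var ** 0.5:.2f} (95% interval)")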

Relevance: 30.00%

Abstract:

A number of methods of evaluating the validity of interval forecasts of financial data are analysed, and illustrated using intraday FTSE100 index futures returns. Some existing interval forecast evaluation techniques, such as the Markov chain approach of Christoffersen (1998), are shown to be inappropriate in the presence of periodic heteroscedasticity. Instead, we consider a regression-based test, and a modified version of Christoffersen's Markov chain test for independence, and analyse their properties when the financial time series exhibit periodic volatility. These approaches lead to different conclusions when interval forecasts of FTSE100 index futures returns generated by various GARCH(1,1) and periodic GARCH(1,1) models are evaluated.
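
Christoffersen's (1998) Markov chain test for independence, one of the techniques analysed, can be sketched as a likelihood ratio test on the transition counts of the violation ("hit") sequence; the hit sequence below is hypothetical, and the paper's modified version of the test is not reproduced here.

    # Minimal sketch of Christoffersen's (1998) Markov chain test for
    # independence of interval forecast violations. The hit sequence is an
    # illustrative assumption, not FTSE100 data.
    import math

    def christoffersen_independence(hits):
        """LR test of first-order independence of a 0/1 violation sequence."""
        n = [[0, 0], [0, 0]]                       # transition counts n[i][j]
        for prev, cur in zip(hits, hits[1:]):
            n[prev][cur] += 1
        n00, n01, n10, n11 = n[0][0], n[0][1], n[1][0], n[1][1]
        p01 = n01 / (n00 + n01)                    # P(hit | no hit before)
        p11 = n11 / (n10 + n11)                    # P(hit | hit before)
        p = (n01 + n11) / (n00 + n01 + n10 + n11)  # unconditional hit rate

        def loglik(p0, p1):
            return (n00 * math.log(1 - p0) + n01 * math.log(p0)
                    + n10 * math.log(1 - p1) + n11 * math.log(p1))

        return -2 * (loglik(p, p) - loglik(p01, p11))  # ~ chi2(1) under H0

    hits = [0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0]
    print(f"LR_ind = {christoffersen_independence(hits):.3f} (compare to 3.84 at 5%)")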

Relevance: 30.00%

Abstract:

A radiometric zircon age of 285.4 +/- 8.6 Ma (ID-TIMS U-Pb) is reported from a tonstein layer interbedded with coal seams in the Faxinal coalfield, Rio Grande do Sul, Brazil. Calibration of palynostratigraphic data with the absolute age shows that the coal depositional interval in the southern Parana Basin is constrained to the Sakmarian. Consequently, the basal Gondwana sequence in the southern part of the basin should lie at the Carboniferous-Permian boundary, not within the Sakmarian as previously considered. The new results are significant for correlations between the Parana Basin and the Argentinian Paganzo Basin (302 +/- 6 Ma and 288 +/- 7 Ma) and with the Karoo Basin, specifically with the top of the Dwyka Tillite (302 +/- 3 Ma and 299.2 +/- 3.2 Ma) and the lowermost Ecca Group (288 +/- 3 Ma and 289.6 +/- 3.8 Ma). The evidence signifies widespread latest Carboniferous volcanic activity in western Gondwana.

Relevance: 30.00%

Abstract:

This thesis uses the zonal travel cost method (ZTCM) to estimate the consumer surplus of the Peace & Love (P&L) festival in Borlänge, Sweden. The study defines counties as the visitors' zones of origin, and visiting rates from each zone are estimated from survey data. The study is novel in that TCM has mostly been applied to environmental and recreational goods rather than to short-term events such as the P&L festival. The analysis shows that travel cost has a significantly negative effect on the visiting rate, as expected. Although income has been significant in similar studies, it turns out to be insignificant in this study. A point estimate of the total consumer surplus of the P&L festival is 35.6 million Swedish kronor. However, this point estimate carries high uncertainty, since its 95% confidence interval is (17.9, 53.2) million kronor. It is also important to note that the estimated value represents only one part of the festival's total economic value; the other components of that value have not been estimated in this thesis.
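
A minimal sketch of the ZTCM calculation, assuming a linear zonal demand curve (the thesis's own specification may differ); all zone data below are hypothetical.

    # Minimal sketch of the zonal travel cost method: regress per-capita
    # visiting rate on zonal travel cost, then integrate under the fitted
    # (here linear) demand curve for consumer surplus. All zone data are
    # hypothetical assumptions.
    import numpy as np

    cost = np.array([200.0, 400.0, 700.0, 1000.0, 1500.0])   # SEK per trip
    rate = np.array([0.020, 0.015, 0.009, 0.005, 0.001])     # visits per capita
    pop  = np.array([3e5, 5e5, 8e5, 1e6, 2e6])               # zone populations

    # OLS fit: rate = a + b * cost (b expected negative)
    b, a = np.polyfit(cost, rate, 1)

    # For a linear demand curve, per-capita CS in a zone is the triangle
    # between the current cost and the choke price -a/b, i.e. rate^2 / (2|b|)
    # (observed rates are used here as an approximation to fitted ones).
    cs_per_capita = (rate ** 2) / (2.0 * abs(b))
    total_cs = float(np.sum(pop * cs_per_capita))
    print(f"slope = {b:.2e} (negative), total CS = {total_cs / 1e6:.1f} MSEK")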

Relevance: 30.00%

Abstract:

The gene encoding glycogen synthase in Neurospora crassa (gsn) is transcriptionally down-regulated when mycelium is exposed to a heat shock from 30 to 45°C. The gsn promoter has one stress response element (STRE) motif that is specifically bound by heat-shock-activated nuclear proteins. In this work, we used biochemical approaches together with mass spectrometric analysis to identify the proteins that bind to the STRE motif and could participate in gsn transcription regulation during heat shock. Crude nuclear extract of heat-shocked mycelium was prepared and fractionated by affinity chromatography. The fractions exhibiting DNA-binding activity were identified by electrophoretic mobility shift assay (EMSA), using as probe a DNA fragment containing the STRE motif. DNA-protein binding activity was confirmed by Southwestern analysis. The molecular mass (MM) of the proteins was estimated by fractionating the crude nuclear extract by SDS-PAGE, followed by EMSA analysis of the proteins corresponding to different MM intervals. Binding activity was detected in the 30-50 kDa MM interval. Fractionation of the crude nuclear proteins by IEF followed by EMSA analysis led to the identification of two active fractions in the pI intervals 3.54-4.08 and 6.77-7.31. The proteins within the MM and pI intervals identified above were excised from a 2-DE gel and subjected to mass spectrometric analysis (MALDI-TOF/TOF) after tryptic digestion. The proteins were identified by searching against the MIPS and MIT N. crassa databases, and five promising candidates were identified. Their structural characteristics and putative roles in gsn transcription regulation are discussed.

Relevance: 30.00%

Abstract:

This paper presents an economic design of X̄ control charts with variable sample sizes, variable sampling intervals, and variable control limits. The sample size n, the sampling interval h, and the control limit coefficient k vary between minimum and maximum values, tightening or relaxing the control. The control is relaxed when an X̄ value falls close to the target and is tightened when an X̄ value falls far from the target. A cost model is constructed that involves the cost of false alarms, the cost of finding and eliminating the assignable cause, the cost associated with production in an out-of-control state, and the cost of sampling and testing. The assumption of an exponential distribution to describe the length of time the process remains in control allows the application of the Markov chain approach for developing the cost function. A comprehensive study is performed to examine the economic advantages of varying the X̄ chart parameters.
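
The adaptive logic can be sketched as follows: the position of X̄ relative to a warning limit selects the (n, h, k) triple for the next sample. The numeric values below are illustrative, not the cost-optimised parameters of the paper.

    # Minimal sketch of an adaptive X-bar chart: after each sample, the
    # position of X-bar relative to a warning limit decides whether the next
    # sample uses the relaxed or the tightened (n, h, k) triple. All numeric
    # values are illustrative assumptions.
    import random, statistics

    RELAXED = dict(n=3, h=2.0, k=3.2)    # small sample, long interval, wide limits
    TIGHT   = dict(n=8, h=0.5, k=2.6)    # large sample, short interval, narrow limits
    WARNING = 1.0                        # warning limit in sigma/sqrt(n) units

    def next_parameters(xbar, mu0, sigma, n):
        z = abs(xbar - mu0) / (sigma / n ** 0.5)
        return TIGHT if z > WARNING else RELAXED   # tighten when far from target

    mu0, sigma, params = 10.0, 1.0, RELAXED
    for t in range(5):
        sample = [random.gauss(mu0, sigma) for _ in range(params["n"])]
        xbar = statistics.mean(sample)
        params = next_parameters(xbar, mu0, sigma, len(sample))
        print(f"t={t}: xbar={xbar:.2f} -> next n={params['n']}, "
              f"h={params['h']}, k={params['k']}")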

Relevance: 30.00%

Abstract:

PURPOSE: Systematic reviews are criticized for frequently offering inconsistent evidence and no straightforward recommendations, and their value seems depreciated when the conclusions are uncertain. This paper describes an alternative approach to evaluating case series studies in health care in the absence of clinical trials. METHODS: We provide illustrations from recent experience. Proportional meta-analysis was performed on surgical outcomes from (a) case series studies (b) of cryoablation or radiofrequency ablation (c) in patients with small renal cell carcinomas. A statistically significant difference between the two interventions was declared when their combined 95% confidence intervals (CIs) did not overlap. RESULTS: As the example demonstrates, this analysis is an alternative way to provide some evidence of the effects of the interventions under evaluation by plotting all available case series when no clinical trials exist in the health field. CONCLUSIONS: Although it yields only a low level of evidence for determining the efficacy, effectiveness and safety of interventions, this alternative approach can help surgeons, physicians and other health professionals reach a provisional decision in health care, along with their clinical expertise and the patient's wishes and circumstances, in the absence of high-quality primary studies. It is not a replacement for the gold standard randomized clinical trial, but an alternative analysis for clinical research.
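
A minimal sketch of the pooling-and-overlap step, assuming fixed-effect inverse-variance pooling on the logit scale (the paper's exact proportional meta-analysis weighting may differ); the study counts are fabricated placeholders.

    # Minimal sketch of a proportional meta-analysis: pool event proportions
    # from case series per intervention and compare 95% CIs for overlap.
    # The (events, patients) counts are fabricated placeholders.
    import math

    def pooled_proportion(events_totals):
        """Fixed-effect inverse-variance pooling of proportions (logit scale)."""
        num = den = 0.0
        for e, n in events_totals:
            p = (e + 0.5) / (n + 1.0)            # continuity correction
            logit = math.log(p / (1 - p))
            var = 1.0 / (e + 0.5) + 1.0 / (n - e + 0.5)
            num += logit / var
            den += 1.0 / var
        m, se = num / den, (1.0 / den) ** 0.5
        inv = lambda x: 1.0 / (1.0 + math.exp(-x))
        return inv(m), inv(m - 1.96 * se), inv(m + 1.96 * se)

    cryo = [(18, 120), (9, 60), (25, 140)]       # (events, patients) per series
    rfa  = [(30, 110), (22, 90), (35, 150)]
    for name, series in [("cryoablation", cryo), ("radiofrequency", rfa)]:
        p, lo, hi = pooled_proportion(series)
        print(f"{name}: {p:.3f} (95% CI {lo:.3f}-{hi:.3f})")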

Relevance: 30.00%

Abstract:

The length of the post-partum anoestrous interval affects reproductive efficiency in many tropical beef cattle herds. In this study, results from genome-wide association studies (Experiment 1: GWAS) and gene expression (Experiment 2: microarray) were combined in a systems approach to reveal genetic markers, genes and pathways underlying the physiology of post-partum anoestrus in tropically adapted cattle. The microarray study measured the expression of 13,964 genes in the hypothalamus of Brahman cows. A total of 366 genes were differentially expressed (DE) in the post-partum period when acyclic cows were compared to cows that had resumed ovarian cycles. Associated markers (P < 0.05) from a high-density GWAS pointed to 2,829 genes associated with post-partum anoestrous interval (PPAI) in two populations of beef cattle: Brahman and Tropical Composite. Together, the experiments provided evidence for 63 genes that are likely to influence the resumption of ovulation post-partum in tropically adapted beef cattle. Functional annotation analysis revealed that some of the 63 genes have known roles in hormonal activity, energy balance and neuronal synapse plasticity. Polymorphisms within the candidate genes identified by this systems approach could have biological significance in post-partum anoestrus and help select Zebu (Bos indicus)-influenced cattle with genetic potential for shorter post-partum anoestrus.
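
At its simplest, the combining step of the systems approach reduces to intersecting the differentially expressed gene list with the GWAS-implicated gene list; the gene symbols below are placeholders, not the study's 63 candidates.

    # Minimal sketch of the intersection step in the systems approach:
    # candidate genes are those both differentially expressed (microarray)
    # and implicated by associated GWAS markers. Gene symbols are placeholders.
    de_genes   = {"GNRH1", "NPY", "ESR1", "IGF1", "LEPR", "POMC"}   # Experiment 2
    gwas_genes = {"NPY", "ESR1", "IGF1", "TAC3", "KISS1", "LEPR"}   # Experiment 1

    candidates = sorted(de_genes & gwas_genes)
    print(f"{len(candidates)} candidate genes: {candidates}")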

Relevance: 30.00%

Abstract:

This paper proposes two new approaches to the sensitivity analysis of multiobjective design optimization problems whose performance functions are highly susceptible to small variations in the design variables and/or design environment parameters. In both methods, less sensitive design alternatives are preferred over others during the multiobjective optimization process. In the first approach, the designer chooses the design variables and/or parameters that cause the uncertainty, associates a robustness index with each design alternative, and adds each index as an objective function in the optimization problem. In the second approach, the designer must know, a priori, the interval of variation in the design variables or in the design environment parameters, because the designer will be accepting the resulting interval of variation in the objective functions. The second method does not require any probability distribution for the uncontrollable variations. Finally, the authors give two illustrative examples to highlight the contributions of the paper.
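
A minimal sketch of the first approach, with a finite-difference robustness index added as an extra objective to minimise; the performance function and perturbation size are illustrative assumptions.

    # Minimal sketch of a robustness index for a design alternative: the
    # worst-case relative change in a performance function f under small
    # perturbations of each design variable. f and delta are illustrative.
    import math

    def robustness_index(f, x, delta=0.01):
        """Worst-case relative change in f under +/- delta perturbations;
        smaller means less sensitive (more robust)."""
        f0 = f(x)
        worst = 0.0
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                xp = list(x)
                xp[i] += sign * delta
                worst = max(worst, abs(f(xp) - f0) / max(abs(f0), 1e-12))
        return worst

    # Hypothetical performance function: steep around x0 = 1, flat for x0 >> 1
    f = lambda x: math.tanh(10.0 * (x[0] - 1.0)) + x[1] ** 2
    print(robustness_index(f, [1.0, 0.5]))   # sensitive design, large index
    print(robustness_index(f, [2.0, 0.5]))   # robust design, small index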

Relevance: 30.00%

Abstract:

When estimating the effect of treatment on HIV using data from observational studies, standard methods may produce biased estimates owing to time-dependent confounders. Such confounding is present when a covariate that is affected by past exposure is a predictor of both the future exposure and the outcome. One example is the CD4 cell count, which is a marker of disease progression in HIV patients but is also a marker for treatment initiation and is itself influenced by treatment. Fitting a marginal structural model (MSM) using inverse probability weights is one way to adjust appropriately for this type of confounding. In this paper we study a simple and intuitive approach to estimating similar treatment effects, using observational data to mimic several randomized controlled trials. Each 'trial' is constructed from the individuals starting treatment in a certain time interval. An overall effect estimate for all such trials is found using composite likelihood inference. The method offers an alternative to inverse probability of treatment weights, which are unstable in certain situations. The estimated parameter is not identical to that of an MSM; it is conditioned on covariate values at the start of each mimicked trial. This allows the study of questions that are not as easily addressed by fitting an MSM. The analysis can be performed as a stratified weighted Cox analysis on the joint data set of all the constructed trials, where each trial is one stratum. The model is applied to data from the Swiss HIV Cohort Study.
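
Construction of the mimicked trials can be sketched as follows: each calendar interval defines one trial, containing the patients starting treatment in that interval plus those still untreated and at risk at its start, stacked with a stratum label for the subsequent stratified weighted Cox analysis. The field names and patient records below are illustrative assumptions.

    # Minimal sketch of constructing "mimicked trials" from observational
    # data: one trial per interval, stacked with a stratum label. A weighted
    # Cox model stratified on 'trial' would then be fitted to the rows.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Patient:
        pid: int
        start_tx: Optional[float]   # treatment start time (None = never treated)
        event_time: float
        event: int                  # 1 = event observed, 0 = censored

    patients = [
        Patient(1, 2.5, 8.0, 1), Patient(2, None, 10.0, 0),
        Patient(3, 5.0, 9.0, 0), Patient(4, None, 4.0, 1),
        Patient(5, 1.0, 7.0, 1),
    ]
    intervals = [(0.0, 3.0), (3.0, 6.0)]   # one mimicked trial per interval

    rows = []
    for stratum, (t0, t1) in enumerate(intervals):
        for p in patients:
            if p.event_time <= t0:
                continue                   # not at risk at trial start
            if p.start_tx is not None and p.start_tx < t0:
                continue                   # already treated before trial start
            treated = p.start_tx is not None and t0 <= p.start_tx < t1
            rows.append(dict(trial=stratum, pid=p.pid, treated=int(treated),
                             time=p.event_time - t0, event=p.event))

    for r in rows:
        print(r)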

Relevance: 30.00%

Abstract:

In biostatistical applications, interest often focuses on estimating the distribution of a time-until-event variable T. If one observes whether or not T exceeds an observed monitoring time at a random number of monitoring times, the data structure is called interval censored data. We extend this data structure by allowing the presence of a possibly time-dependent covariate process that is observed until the end of follow-up. If one only assumes that the censoring mechanism satisfies coarsening at random, then, by the curse of dimensionality, typically no regular estimators will exist. To fight the curse of dimensionality we follow the approach of Robins and Rotnitzky (1992) by modeling parameters of the censoring mechanism. We model the right-censoring mechanism by modeling the hazard of the follow-up time, conditional on T and the covariate process. For the monitoring mechanism we avoid modeling the joint distribution of the monitoring times by only modeling a univariate hazard of the pooled monitoring times, conditional on the follow-up time, T, and the covariate process, which can be estimated by treating the pooled sample of monitoring times as i.i.d. In particular, it is assumed that the monitoring times and the right-censoring times depend on T only through the observed covariate process. We introduce an inverse probability of censoring weighted (IPCW) estimator of the distribution of T, and of smooth functionals thereof, which is guaranteed to be consistent and asymptotically normal if we have available correctly specified semiparametric models for the two hazards of the censoring process. Furthermore, given such correctly specified models for these hazards of the censoring process, we propose a one-step estimator which improves on the IPCW estimator if we correctly specify a lower-dimensional working model for the conditional distribution of T given the covariate process, and which remains consistent and asymptotically normal if this working model is misspecified. The one-step estimator is shown to be efficient if each subject is monitored at most once and the working model contains the truth. In general, the one-step estimator is shown to use the surrogate information optimally if the working model contains the truth. It is not optimal in using the interval information provided by the current status indicators at the monitoring times, but simulations in Peterson and van der Laan (1997) show that the efficiency loss is small.
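
The weighting idea behind the IPCW estimator can be sketched in the much simpler right-censored setting: each observed event is weighted by the inverse of the estimated probability of remaining uncensored. This is an assumed simplification; the paper's interval-censored, covariate-dependent setting is far more general, and the data below are simulated for illustration.

    # Minimal sketch of the IPCW idea for plain right censoring: weight each
    # observed event by 1 / (estimated censoring survival at its time).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 2000
    t = rng.exponential(1.0, n)          # true event times
    c = rng.exponential(2.0, n)          # censoring times
    obs = np.minimum(t, c)
    delta = (t <= c).astype(float)       # 1 = event observed

    def km_censoring_survival(obs, delta, times):
        """Kaplan-Meier estimate of the censoring survival S_C at given times."""
        order = np.argsort(obs)
        o, d = obs[order], delta[order]
        at_risk = np.arange(len(o), 0, -1)
        factors = np.where(d == 0, 1.0 - 1.0 / at_risk, 1.0)  # drops at censorings
        surv = np.cumprod(factors)
        idx = np.searchsorted(o, times, side="right") - 1
        return np.where(idx >= 0, surv[np.clip(idx, 0, None)], 1.0)

    def ipcw_cdf(t0):
        sc = km_censoring_survival(obs, delta, obs)   # S_C at each subject's time
        w = delta / np.clip(sc, 1e-8, None)           # IPCW weights
        return float(np.mean(w * (obs <= t0)))

    print(f"IPCW F(1.0) = {ipcw_cdf(1.0):.3f}, true = {1 - np.exp(-1.0):.3f}")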