935 results for INTERVAL ESTIMATION
Abstract:
Knowledge of the time interval from death (post-mortem interval, PMI) has an enormous legal, criminological and psychological impact. Aiming to find an objective method for the determination of PMIs in forensic medicine, 1H-MR spectroscopy (1H-MRS) was used in a sheep head model to follow changes in brain metabolite concentrations after death. Following the characterization of newly observed metabolites (Ith et al., Magn. Reson. Med. 2002; 5: 915-920), the full set of acquired spectra was analyzed statistically to provide a quantitative estimation of PMIs with their respective confidence limits. In a first step, analytical mathematical functions are proposed to describe the time courses of 10 metabolites in the decomposing brain up to 3 weeks post-mortem. Subsequently, the inverted functions are used to predict PMIs based on the measured metabolite concentrations. Individual PMIs calculated from five different metabolites are then pooled, weighted by their inverse variances. The predicted PMIs from all individual examinations in the sheep model are compared with the known true times. In addition, four human cases with forensically estimated PMIs are compared with predictions based on single in situ MRS measurements. Interpretation of the individual sheep examinations gave a good correlation up to 250 h post-mortem, demonstrating that the predicted PMIs are consistent with the data used to generate the model. Comparison of the estimated PMIs with the forensically determined PMIs in the four human cases shows an adequate correlation. Current PMI estimations based on forensic methods typically suffer from uncertainties on the order of days to weeks, without mathematically defined confidence information. In contrast, a single 1H-MRS measurement of brain tissue in situ yields PMIs with defined and favorable confidence intervals in the range of hours, thus offering a quantitative and objective method for the determination of PMIs.
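The pooling step described above, in which PMI estimates from individual metabolites are combined with weights equal to their inverse variances, can be sketched in a few lines; the metabolite estimates, variances and the resulting numbers below are hypothetical placeholders, not values from the study.

```python
import numpy as np

def pool_pmi(estimates, variances):
    """Combine per-metabolite PMI estimates by inverse-variance weighting.

    estimates : PMI predictions (hours), one per metabolite
    variances : corresponding variances of those predictions
    Returns the pooled PMI and its approximate variance.
    """
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(weights * estimates) / np.sum(weights)
    pooled_var = 1.0 / np.sum(weights)       # variance of the weighted mean
    return pooled, pooled_var

# Hypothetical example: five metabolites give these PMI estimates (h) and variances
pmi, var = pool_pmi([118.0, 125.0, 110.0, 130.0, 121.0],
                    [36.0, 64.0, 100.0, 49.0, 25.0])
half_width = 1.96 * var ** 0.5               # approximate 95% confidence half-width
print(f"pooled PMI = {pmi:.1f} h +/- {half_width:.1f} h")
```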
Abstract:
Haldane (1935) developed a method for estimating the male-to-female ratio of mutation rate ($\alpha$) by using sex-linked recessive genetic diseases, but in six different studies using hemophilia A data the estimates of $\alpha$ varied from 1.2 to 29.3. Direct genomic sequencing is a better approach, but it is laborious and not readily applicable to non-human organisms. To study the sex ratios of mutation rate in various mammals, I used an indirect method proposed by Miyata et al. (1987). This method takes advantage of the fact that different chromosomes segregate differently between males and females, and uses the ratios of mutation rates in sequences on different chromosomes to estimate the male-to-female ratio of mutation rate. I sequenced the last intron of the ZFX and ZFY genes in 6 species of primates and 2 species of rodents; I also sequenced partial genomic sequences of the Ube1x and Ube1y genes of mice and rats. The purposes of my study, in addition to the estimation of $\alpha$'s in different mammalian species, are to test the hypothesis that most mutations are replication dependent and to examine the generation-time effect on $\alpha$. The $\alpha$ value estimated from the ZFX and ZFY introns of the six primate species is ${\sim}$6. This estimate is the same as an earlier estimate using only 4 species of primates, but the 95% confidence interval has been reduced from (2, 84) to (2, 33). The estimate of $\alpha$ in the rodents obtained from the Zfx and Zfy introns is ${\sim}$1.9, and that derived from the Ube1x and Ube1y introns is ${\sim}$2. Both estimates have a 95% confidence interval from 1 to 3. These two estimates are very close to each other, but are only one-third of that of the primates, suggesting a generation-time effect on $\alpha$. An $\alpha$ of 6 in primates and of 2 in rodents is close to the estimates of the male-to-female ratio of the number of germ-cell divisions per generation in humans and mice, which are 6 and 2, respectively, assuming a generation time of 20 years in humans and 5 months in mice. These findings suggest that errors during germ-cell DNA replication are the primary source of mutation and that $\alpha$ decreases with decreasing generation time.
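The indirect estimate rests on the standard Miyata et al. (1987) expectation for X-Y comparisons: the Y chromosome passes only through males while the X spends two-thirds of its generations in females, so the Y-to-X substitution-rate ratio $R$ satisfies $R = 3\alpha/(\alpha+2)$, which inverts to $\alpha = 2R/(3-R)$. A minimal sketch of this inversion (the rate ratios below are illustrative, not values from the study):

```python
def alpha_from_yx_ratio(r_yx):
    """Male-to-female mutation-rate ratio (alpha) from the Y/X substitution-rate ratio.

    Uses the Miyata et al. (1987) expectation R = 3*alpha / (alpha + 2),
    inverted to alpha = 2*R / (3 - R).  Valid only for 0 < R < 3.
    """
    if not 0.0 < r_yx < 3.0:
        raise ValueError("Y/X rate ratio must lie strictly between 0 and 3")
    return 2.0 * r_yx / (3.0 - r_yx)

# Illustrative values: a Y/X ratio of 2.25 corresponds to alpha = 6 (primate-like),
# while a ratio of 1.5 corresponds to alpha = 2 (rodent-like).
print(alpha_from_yx_ratio(2.25))  # 6.0
print(alpha_from_yx_ratio(1.5))   # 2.0
```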
Abstract:
We present an application- and sample-independent method for the automatic discrimination of noise and signal in optical coherence tomography B-scans. The proposed algorithm models the observed noise probabilistically and allows for a dynamic determination of image noise parameters and the choice of appropriate image rendering parameters. This overcomes observer variability and the need for a priori information about the content of sample images, both of which are challenging to estimate systematically with current systems. As such, our approach has the advantage of automatically determining crucial parameters for evaluating rendered image quality in a systematic and task-independent way. We tested our algorithm on data from four different biological and non-biological samples (index finger, lemon slices, sticky tape, and detector cards) acquired with three different experimental spectral-domain optical coherence tomography (OCT) measurement systems, including a swept-source OCT. The results are compared to parameters determined manually by four experienced OCT users. Overall, our algorithm works reliably regardless of which system and sample are used and estimates noise parameters in all cases within the confidence interval of those found by the observers.
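The general idea of estimating noise parameters from signal-free regions of a B-scan and deriving a rendering threshold from them can be illustrated with a generic sketch; this assumes a simple Gaussian noise model and hypothetical array dimensions, and is not the probabilistic algorithm of the paper.

```python
import numpy as np

def estimate_noise_threshold(bscan, background_rows=50, k=3.0):
    """Estimate noise parameters from the top rows of a linear-intensity B-scan.

    Assumes those rows contain no sample signal; returns the noise mean, the noise
    standard deviation, and a display threshold k standard deviations above the mean.
    """
    background = bscan[:background_rows, :].ravel()
    mu, sigma = background.mean(), background.std()
    return mu, sigma, mu + k * sigma

# Hypothetical B-scan: pure noise with a brighter "sample" band in the middle
rng = np.random.default_rng(0)
bscan = rng.normal(10.0, 2.0, size=(512, 1024))
bscan[200:300, :] += 30.0
mu, sigma, threshold = estimate_noise_threshold(bscan)
signal_mask = bscan > threshold       # pixels rendered as signal rather than noise
print(mu, sigma, threshold, signal_mask.mean())
```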
Abstract:
BACKGROUND Fetal weight estimation (FWE) is an important factor for clinical management decisions, especially in imminent preterm birth at the limit of viability between 23(0/7) and 26(0/7) weeks of gestation. It is crucial to detect and eliminate factors that have a negative impact on the accuracy of FWE. DATA SOURCES In this systematic literature review, we investigated 14 factors that may influence the accuracy of FWE, in particular in preterm neonates born at the limit of viability. RESULTS We found that gestational age, maternal body mass index, amniotic fluid index and ruptured membranes, presentation of the fetus, location of the placenta and the presence of multiple fetuses do not seem to have an impact on FWE accuracy. Findings on the influence of the examiner's level of experience and of fetal gender were conflicting. Fetal weight, the time interval between estimation and delivery and the use of different formulas seem to have an evident effect on FWE accuracy. No results were obtained on the impact of active labor. DISCUSSION This review reveals that only a few studies have investigated factors possibly influencing the accuracy of FWE in preterm neonates at the limit of viability. Further research on potential confounding factors in this specific age group is needed.
Abstract:
We use a multiproxy approach to monitor changes in the vertical profile of the Indonesian Throughflow as well as monsoonal wind and precipitation patterns in the Timor Sea on glacial-interglacial, precessional, and suborbital timescales. We focus on an interval of extreme climate change and sea level variation: marine isotope stage (MIS) 6 to MIS 5e. Paleoproductivity fluctuations in the Timor Sea follow a precessional beat related to the intensity of the Australian (NW) monsoon. Paired Mg/Ca and δ18O measurements of surface- and thermocline-dwelling planktonic foraminifers (G. ruber and P. obliquiloculata) indicate an increase of >4°C in both surface and thermocline water temperatures during Termination II. Tropical sea surface temperature changed synchronously with ice volume (benthic δ18O) during deglaciation, implying a direct coupling of high- and low-latitude climate via atmospheric and/or upper ocean circulation. Substantial cooling and freshening of thermocline waters occurred toward the end of Termination II and during MIS 5e, indicating a change in the vertical profile of the Indonesian Throughflow from surface- to thermocline-dominated flow.
Abstract:
This paper studies feature subset selection in classification using a multiobjective estimation of distribution algorithm. We consider six functions, namely area under ROC curve, sensitivity, specificity, precision, F1 measure and Brier score, for evaluation of feature subsets and as the objectives of the problem. One of the characteristics of these objective functions is the existence of noise in their values that should be appropriately handled during optimization. Our proposed algorithm consists of two major techniques which are specially designed for the feature subset selection problem. The first one is a solution ranking method based on interval values to handle the noise in the objectives of this problem. The second one is a model estimation method for learning a joint probabilistic model of objectives and variables which is used to generate new solutions and advance through the search space. To simplify model estimation, l1 regularized regression is used to select a subset of problem variables before model learning. The proposed algorithm is compared with a well-known ranking method for interval-valued objectives and a standard multiobjective genetic algorithm. Particularly, the effects of the two new techniques are experimentally investigated. The experimental results show that the proposed algorithm is able to obtain comparable or better performance on the tested datasets.
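The pre-selection of problem variables with l1-regularized regression before model learning can be illustrated with scikit-learn's Lasso; the synthetic continuous data and the regularization strength below are placeholders chosen only to demonstrate the mechanism, not the paper's settings.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))      # candidate problem variables (synthetic)
y = X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=200)   # objective value to model

# The l1 penalty drives most coefficients exactly to zero, keeping a sparse subset
lasso = Lasso(alpha=0.05).fit(X, y)
selected = np.flatnonzero(lasso.coef_)   # indices of variables kept for model learning
print(selected)
```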
Abstract:
As one of the most competitive approaches to multi-objective optimization, evolutionary algorithms have been shown to obtain very good results for many real-world multi-objective problems. One of the issues that can affect the performance of these algorithms is uncertainty in the quality of the solutions, which is usually represented as noise in the objective values. Therefore, handling noisy objectives in evolutionary multi-objective optimization algorithms becomes very important and has been gaining more attention in recent years. In this paper we present the ?-degree Pareto dominance relation for ordering solutions in multi-objective optimization when the values of the objective functions are given as intervals. Based on this dominance relation, we propose an adaptation of the non-dominated sorting algorithm for ranking the solutions. This ranking method is then used in a standard multi-objective evolutionary algorithm and a recently proposed novel multi-objective estimation of distribution algorithm based on joint variable-objective probabilistic modeling, and applied to a set of multi-objective problems with different levels of independent noise. The experimental results show that the use of the proposed method for solution ranking makes it possible to approximate Pareto sets that are considerably better than those obtained with the dominance probability-based ranking method, which is one of the main methods for noise handling in multi-objective optimization.
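A simple, conservative rule for comparing solutions whose objective values are intervals (an illustration of the general idea, not the degree-based relation proposed in the paper) is to declare dominance only when every interval of one solution lies entirely at or below the corresponding interval of the other, and strictly below in at least one objective:

```python
def interval_dominates(a, b):
    """Conservative interval dominance for minimization.

    a, b: lists of (low, high) objective intervals for two solutions.
    a dominates b if, in every objective, a's whole interval lies at or below
    b's whole interval, and strictly below in at least one objective.
    """
    at_least_as_good = all(a_hi <= b_lo for (_, a_hi), (b_lo, _) in zip(a, b))
    strictly_better = any(a_hi < b_lo for (_, a_hi), (b_lo, _) in zip(a, b))
    return at_least_as_good and strictly_better

# Two objectives, interval-valued because of noise (illustrative numbers)
sol1 = [(0.10, 0.20), (1.0, 1.5)]
sol2 = [(0.25, 0.30), (1.6, 2.0)]
print(interval_dominates(sol1, sol2))  # True
print(interval_dominates(sol2, sol1))  # False
```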
Abstract:
In the deep sea, the Paleocene-Eocene Thermal Maximum (PETM) is often marked by clay-rich condensed intervals caused by dissolution of carbonate sediments, capped by a carbonate-rich interval. Constraining the duration of both the dissolution and subsequent cap-carbonate intervals is essential to computing marine carbon fluxes and thus testing hypotheses for the origin of this event. To this end, we provide new high-resolution helium isotope records spanning the Paleocene-Eocene boundary at ODP Site 1266 in the South Atlantic. The extraterrestrial 3He (3He_ET) concentrations replicate trends observed at ODP Site 690 by Farley and Eltgroth (2003, doi:10.1016/S0012-821X(03)00017-7). By assuming a constant flux of 3He_ET, we constrain relative changes in sediment accumulation rates across the PETM and construct a new age model for the event. In this new chronology the zero-carbonate layer represents 35 kyr, some of which reflects clay produced by dissolution of Paleocene (pre-PETM) sediments. Above this layer, carbonate concentrations increase for ~165 kyr and remain higher than in the latest Paleocene until 234 +48/-34 kyr above the base of the clay. The new chronology indicates that minimum δ13C values persisted for a maximum of 134 +27/-19 kyr and that the inflection point previously chosen to designate the end of the CIE recovery occurs at 217 +44/-31 kyr. This allocation of time differs from that of the cycle-based age model of Röhl et al. (2007, doi:10.1029/2007GC001784) in that it assigns more time to the clay layer, followed by a more gradual recovery of carbonate-rich sedimentation. The new model also suggests a longer sustained δ13C excursion followed by a more rapid recovery to pre-PETM δ13C values. These differences have important implications for constraining the source(s) of carbon and the mechanisms for its subsequent sequestration, favoring models that include a sustained release
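The constant-flux logic translates directly into interval durations: if the extraterrestrial 3He flux F is taken as constant, the sediment mass accumulation rate at any level is F divided by the measured 3He_ET concentration, and the time represented by an interval is its 3He_ET inventory divided by F. A minimal sketch with made-up numbers (not data from Site 1266 or Site 690):

```python
import numpy as np

# Hypothetical measurements through a condensed interval (illustrative only)
he3_et = np.array([2.0, 5.0, 8.0, 6.0, 3.0])     # 3He_ET concentration, pcc/g
thickness = np.array([2.0, 1.0, 0.5, 1.0, 2.0])  # layer thickness, cm
density = 1.0                                    # assumed dry bulk density, g/cm^3
flux = 1.0                                       # assumed constant 3He_ET flux, pcc/cm^2/kyr

# Accumulation rate is inversely proportional to the 3He_ET concentration, so the
# time represented by each layer is its 3He_ET inventory divided by the flux.
inventory = he3_et * thickness * density         # pcc/cm^2 per layer
duration_kyr = inventory / flux
print(duration_kyr, duration_kyr.sum())          # per-layer durations and total time
```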
Abstract:
Background: Sentinel node biopsy (SNB) is being increasingly used but its place outside randomized trials has not yet been established. Methods: The first 114 sentinel node (SN) biopsies performed for breast cancer at the Princess Alexandra Hospital from March 1999 to June 2001 are presented. In 111 cases axillary dissection was also performed, allowing the accuracy of the technique to be assessed. A standard combination of preoperative lymphoscintigraphy, intraoperative gamma probe and injection of blue dye was used in most cases. Results are discussed in relation to the risk and potential consequences of understaging. Results: Where both probe and dye were used, the SN was identified in 90% of patients. A significant number of patients were treated in two stages and the technique was no less effective in patients who had SNB performed at a second operation after the primary tumour had already been removed. The interval from radioisotope injection to operation was very wide (between 2 and 22 h) and did not affect the outcome. Nodal metastases were present in 42 patients in whom an SN was found, and in 40 of these the SN was positive, giving a false negative rate of 4.8% (2/42), with the overall percentage of patients understaged being 2%. For this particular group as a whole, the increased risk of death due to systemic therapy being withheld as a consequence of understaging (if SNB alone had been employed) is estimated at less than 1/500. The risk for individuals will vary depending on other features of the particular primary tumour. Conclusion: For patients who elect to have the axilla staged using SNB alone, the risk and consequences of understaging need to be discussed. These risks can be estimated by allowing for the specific surgeon's false negative rate for the technique, and considering the likelihood of nodal metastases for a given tumour. There appears to be no disadvantage with performing SNB at a second operation after the primary tumour has already been removed. Clearly, for a large number of patients, SNB alone will be safe, but ideally participation in randomized trials should continue to be encouraged.
Abstract:
In various signal-channel estimation problems, the channel being estimated may be well approximated by a discrete finite impulse response (FIR) model with sparsely separated active (nonzero) taps. A common approach to estimating such channels involves a discrete normalized least-mean-square (NLMS) adaptive FIR filter, every tap of which is adapted at each sample interval. Such an approach suffers from slow convergence rates and poor tracking when the required FIR filter is "long." Recently, NLMS-based algorithms have been proposed that employ least-squares-based structural detection techniques to exploit possible sparse channel structure and thereby provide improved estimation performance. However, these algorithms perform poorly when there is a large dynamic range amongst the active taps. In this paper, we propose two modifications to the previous algorithms which essentially remove this limitation. The modifications also significantly improve the applicability of the detection technique to structurally time-varying channels. Importantly, for sparse channels, the computational cost of the newly proposed detection-guided NLMS estimator is only marginally greater than that of the standard NLMS estimator. Simulations demonstrate the favourable performance of the newly proposed algorithm.
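For reference, a minimal NLMS adaptive FIR estimator (the baseline that the detection-guided algorithms build on) looks like the following; the sparse channel, step size and regularization constant are illustrative choices, and the paper's detection-guided modifications are not included.

```python
import numpy as np

def nlms(x, d, num_taps, mu=0.5, eps=1e-6):
    """Normalized LMS estimate of an FIR channel: x is the input (training)
    sequence, d the observed channel output; returns the final tap estimates."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # x[n], x[n-1], ..., x[n-num_taps+1]
        e = d[n] - w @ u                      # a priori estimation error
        w += (mu / (eps + u @ u)) * e * u     # normalized gradient step
    return w

# Sparse channel with a few widely separated active taps (illustrative values)
rng = np.random.default_rng(2)
h = np.zeros(64)
h[[3, 20, 45]] = [1.0, -0.5, 0.25]
x = rng.normal(size=5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.normal(size=len(x))
h_hat = nlms(x, d, num_taps=64)
print(np.round(h_hat[[3, 20, 45]], 2))        # should be close to 1.0, -0.5, 0.25
```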
Abstract:
We experimentally investigate channel estimation and compensation in a chromatic dispersion (CD) limited 20 Gbit/s optical fast orthogonal frequency division multiplexing (F-OFDM) system with up to 840 km transmission. It is shown that a symmetric-extension-based guard interval (GI) is required to enable CD compensation using one-tap equalizers. As few as one optical F-OFDM symbol, with four and six pilot tones per symbol, can achieve near-optimal channel estimation and compensation performance for 600 km and 840 km transmission, respectively.
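The role of the pilot tones and one-tap equalizers can be illustrated with a generic frequency-domain sketch; this uses ordinary FFT-based OFDM with made-up subcarrier counts, pilot spacing and a quadratic-phase CD model, not the experimental F-OFDM setup.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sc = 64                                        # subcarriers (illustrative)
pilots = np.linspace(0, n_sc - 1, 9, dtype=int)  # pilot-tone positions
tx = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, n_sc))   # QPSK data symbols
tx[pilots] = 1.0 + 0j                            # known pilot values

# Chromatic dispersion acts, to first order, as a quadratic phase across frequency
H = np.exp(1j * 0.001 * (np.arange(n_sc) - n_sc / 2) ** 2)
rx = H * tx + 0.02 * (rng.normal(size=n_sc) + 1j * rng.normal(size=n_sc))

# Estimate the channel at the pilots, interpolate to all subcarriers, then
# compensate with one complex multiplication ("one tap") per subcarrier.
H_pilot = rx[pilots] / tx[pilots]
H_est = (np.interp(np.arange(n_sc), pilots, H_pilot.real)
         + 1j * np.interp(np.arange(n_sc), pilots, H_pilot.imag))
eq = rx / H_est
print(np.mean(np.abs(eq - tx) ** 2))             # residual error after equalization
```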
Abstract:
Threshold estimation with sequential procedures is justifiable on the surmise that the index used in the so-called dynamic stopping rule has diagnostic value for identifying when an accurate estimate has been obtained. The performance of five types of Bayesian sequential procedure was compared here to that of an analogous fixed-length procedure. The indices used in the sequential procedures were: (1) the width of the Bayesian probability interval, (2) the posterior standard deviation, (3) the absolute change, (4) the average change, and (5) the number of sign fluctuations. A simulation study was carried out to evaluate which index renders estimates with less bias and smaller standard error at lower cost (i.e. a lower average number of trials to completion), in both yes–no and two-alternative forced-choice (2AFC) tasks. We also considered the effect of the form and parameters of the psychometric function and its similarity to the model function assumed in the procedure. Our results show that sequential procedures do not outperform fixed-length procedures in yes–no tasks. However, in 2AFC tasks, sequential procedures not based on sign fluctuations all yield minimally better estimates than fixed-length procedures, although most of the improvement occurs with short runs that render undependable estimates, and the differences vanish when the procedures run for a number of trials (around 70) that ensures dependability. Thus, none of the indices considered here (some of which are widespread) has the diagnostic value that would justify its use. In addition, difficulties of implementation make sequential procedures unfit as alternatives to fixed-length procedures.
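The first index above, the width of the Bayesian probability interval, leads to a stopping rule of the following generic form; this sketch uses a simple Beta-Bernoulli posterior on a response probability rather than a full psychometric-function model, so it only illustrates the mechanics of interval-width stopping, not the simulated procedures themselves.

```python
import numpy as np
from scipy import stats

def run_until_interval_narrow(true_p, width_criterion=0.2, max_trials=200, seed=0):
    """Collect yes/no trials until the 95% Bayesian probability interval for the
    response probability is narrower than a criterion (or max_trials is reached)."""
    rng = np.random.default_rng(seed)
    successes = trials = 0
    for _ in range(max_trials):
        successes += rng.random() < true_p
        trials += 1
        lo, hi = stats.beta.ppf([0.025, 0.975], 1 + successes, 1 + trials - successes)
        if hi - lo < width_criterion:
            break
    return (1 + successes) / (2 + trials), trials   # posterior mean and trials used

print(run_until_interval_narrow(0.75))
```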
Abstract:
The standard difference model of two-alternative forced-choice (2AFC) tasks implies that performance should be the same when the target is presented in the first or the second interval. Empirical data often show “interval bias” in that percentage correct differs significantly when the signal is presented in the first or the second interval. We present an extension of the standard difference model that accounts for interval bias by incorporating an indifference zone around the null value of the decision variable. Analytical predictions are derived which reveal how interval bias may occur when data generated by the guessing model are analyzed as prescribed by the standard difference model. Parameter estimation methods and goodness-of-fit testing approaches for the guessing model are also developed and presented. A simulation study is included whose results show that the parameters of the guessing model can be estimated accurately. Finally, the guessing model is tested empirically in a 2AFC detection procedure in which guesses were explicitly recorded. The results support the guessing model and indicate that interval bias is not observed when guesses are separated out.
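The indifference-zone idea can be simulated directly: the observer takes the difference between the two interval observations, guesses (with some first-interval preference) whenever that difference falls inside the zone, and otherwise picks the interval with the larger observation. A minimal sketch with hypothetical parameter values, showing how this produces different percent correct for the two intervals:

```python
import numpy as np

def simulate_2afc(d_prime=1.0, delta=0.5, p_first=0.7, n_trials=100000, seed=4):
    """Percent correct when the signal is in interval 1 vs. interval 2, under a
    difference model with an indifference zone of half-width delta; guesses inside
    the zone go to interval 1 with probability p_first."""
    rng = np.random.default_rng(seed)
    signal_first = rng.random(n_trials) < 0.5
    x1 = rng.normal(d_prime * signal_first, 1.0)    # interval-1 observation
    x2 = rng.normal(d_prime * ~signal_first, 1.0)   # interval-2 observation
    diff = x1 - x2
    guess = np.abs(diff) < delta                    # decision variable inside the zone
    choose_first = np.where(guess, rng.random(n_trials) < p_first, diff > 0)
    correct = choose_first == signal_first
    return correct[signal_first].mean(), correct[~signal_first].mean()

print(simulate_2afc())   # asymmetric percent correct: the "interval bias"
```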