867 results for Interval sampling
Abstract:
In medical follow-up studies, ordered bivariate survival data are frequently encountered when bivariate failure events are used as the outcomes to identify the progression of a disease. In cancer studies, interest may focus on bivariate failure times, for example, time from birth to cancer onset and time from cancer onset to death. This paper considers a sampling scheme where the first failure event (cancer onset) is identified within a calendar time interval, the time of the initiating event (birth) can be retrospectively confirmed, and the occurrence of the second event (death) is observed subject to right censoring. To analyze this type of bivariate failure time data, it is important to recognize the bias arising from interval sampling. In this paper, nonparametric and semiparametric methods are developed to analyze bivariate survival data with interval sampling under stationary and semi-stationary conditions. Numerical studies demonstrate that the proposed estimation approaches perform well with practical sample sizes in different simulated models. We apply the proposed methods to SEER ovarian cancer registry data to illustrate the methods and theory.
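To make the selection bias concrete, here is a minimal simulation sketch of the sampling scheme described above (illustrative only, not the paper's estimators): a subject is observed only if the first failure event falls inside a calendar window, and the second gap time is right-censored at the end of follow-up. All distributions, window endpoints, and the censoring time are assumptions made up for the illustration.

```python
# Minimal sketch (illustrative, not the paper's estimators) of the
# interval-sampling scheme: the first failure event must fall inside a
# calendar window for a subject to be observed, and the second gap time
# is right-censored at the end of follow-up.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
birth = rng.uniform(0.0, 100.0, n)     # calendar time of initiating event
x = rng.exponential(60.0, n)           # gap time: birth -> cancer onset
y = rng.exponential(5.0, n)            # gap time: onset -> death
onset = birth + x                      # calendar time of first failure event

a, b, c_end = 80.0, 90.0, 95.0         # sampling window and end of follow-up
sampled = (onset >= a) & (onset <= b)  # subject observed only if onset in [a, b]

t_obs = np.minimum(y[sampled], c_end - onset[sampled])  # censored second gap time
delta = y[sampled] <= c_end - onset[sampled]            # death actually observed?

# Interval sampling biases the first gap time (here, strongly downward):
print(f"mean onset age: sampled {x[sampled].mean():.1f} vs population {x.mean():.1f}")
print(f"fraction of deaths right-censored: {1 - delta.mean():.2f}")
```

The naive sampled mean differs sharply from the population mean; this is the interval-sampling bias that the paper's nonparametric and semiparametric estimators are designed to correct.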
Abstract:
In this work we use Interval Mathematics to establish interval counterparts for the main tools used in digital signal processing. More specifically, the approach developed here is oriented to signals, systems, sampling, quantization, coding, and Fourier transforms. A detailed study is provided for some interval arithmetics that handle complex numbers, namely: complex (rectangular) interval arithmetic, circular complex arithmetic, and interval arithmetic for polar sectors. This leads us to investigate some properties that are relevant for the development of a theory of interval digital signal processing. It is shown that the sets IR and R(C) endowed with any correct arithmetic are not algebraic fields, meaning that those sets do not behave like the real and complex numbers. An alternative to the notion of interval complex width is also provided, and the Kulisch-Miranker order is used to write complex numbers in interval form, enabling operations on endpoints. The use of interval signals and systems is made possible by the representation of complex values in floating-point systems. That is, if a number x ∈ R is not representable in a floating-point system F, then it is mapped to an interval [x̲; x̄], such that x̲ is the largest number in F that is smaller than x and x̄ is the smallest number in F that is greater than x. This interval representation is the starting point for definitions such as interval signals and systems taking real or complex values, and it provides the extension of notions such as causality, stability, time invariance, homogeneity, additivity, and linearity to interval systems. The process of quantization is extended to its interval counterpart, and interval versions of quantization levels, quantization error, and the encoded signal are then provided. It is shown that the interval quantization levels represent complex quantization levels and that the classical quantization error ranges over the interval quantization error. An estimate of the interval quantization error and an interval version of the Z-transform (and hence the Fourier transform) are provided. Finally, the results of a Matlab implementation are given.
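The enclosure just described, mapping a non-representable real to the tightest bracketing pair in F, is easy to state in code. Below is a minimal sketch in Python rather than Matlab, taking exact rationals as the "true" reals and Python's binary doubles as the floating-point system F; the helper name float_enclosure is ours, not from the paper.

```python
# Sketch of the interval representation described above: a real x not
# representable in F is mapped to [lo, hi], where lo is the largest
# float <= x and hi is the smallest float >= x.
import math
from fractions import Fraction

def float_enclosure(x: Fraction) -> tuple[float, float]:
    """Map an exact rational x to the tightest pair of floats lo <= x <= hi."""
    f = float(x)                      # nearest float to x (round-to-nearest)
    if Fraction(f) == x:              # x is exactly representable in F
        return f, f
    if Fraction(f) < x:               # rounding went down: f is the lower endpoint
        return f, math.nextafter(f, math.inf)
    return math.nextafter(f, -math.inf), f

lo, hi = float_enclosure(Fraction(1, 10))   # decimal 0.1 is not a binary float
print(lo, hi, Fraction(lo) <= Fraction(1, 10) <= Fraction(hi))   # prints ... True
```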
Abstract:
A method was developed to measure porosity and dissolved interstitial silicate at millimeter intervals or less in a sediment core. In cores from Emerald Basin (Scotian Shelf), interstitial concentrations near the sediment surface did not drop rapidly to bottom-water concentrations as measured in bottle casts (28 µM) but remained as high as 166 µM in the upper 0.5 mm of sediment. High rates of benthic silicate release were measured that could not be accounted for by interstitial concentration gradients or by ventilation of macro-invertebrate burrows. The silicate discontinuity observed between the sediments and the water column suggests that a diffusive sublayer exists in a zone of viscous flow above the sediment surface. This is possible only if a surface reaction is primarily responsible for silicate release. By assuming a linear concentration gradient across this diffusive sublayer, the silicate release rates were used to estimate the thickness of the sublayer to be about 2 mm.
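The sublayer estimate follows from Fick's first law with a linear gradient: J = D(C_pore - C_bw)/delta, so delta = D(C_pore - C_bw)/J. A small worked sketch is below; only the two concentrations and the roughly 2 mm answer come from the abstract, while the diffusivity and flux are order-of-magnitude placeholders, not the paper's measured values.

```python
# Sketch of the sublayer-thickness estimate: assuming a linear gradient
# across the diffusive sublayer, Fick's law J = D * dC/dz gives
# delta = D * (C_pore - C_bw) / J. D and J below are illustrative
# placeholders, not the paper's measurements.
D = 1.0e-5       # molecular diffusivity of silicate, cm^2/s (typical order)
c_pore = 166e-9  # interstitial concentration, mol/cm^3 (166 uM)
c_bw = 28e-9     # bottom-water concentration, mol/cm^3 (28 uM)
j = 7.0e-12      # hypothetical benthic silicate flux, mol/cm^2/s

delta_cm = D * (c_pore - c_bw) / j
print(f"sublayer thickness ~ {delta_cm * 10:.1f} mm")   # ~2 mm for these inputs
```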
Abstract:
The Late Miocene-Early Pliocene paleoclimatic history has been evaluated for a deep drilled sediment sequence at Deep Sea Drilling Project Site 281 and a shallow-water marine sediment sequence at Blind River, New Zealand, both of which lay within the Subantarctic water mass during the Late Miocene. A major, faunally determined cooling event within the latest Miocene at Site 281 and Blind River coincides with oxygen isotopic changes in benthonic foraminiferal composition at DSDP Site 284 considered by Shackleton and Kennett (1975) to indicate a significant increase in Antarctic ice sheet volume. However, at Site 281 benthonic foraminiferal oxygen isotopic changes do not record such a large increase in Antarctic ice volume. It is possible that the critical interval lies within an unsampled section (no recovery) in the latest Miocene. Two benthonic oxygen isotopic events in the Late Miocene (0.5‰ and 1‰ in the light direction) may be useful as time-stratigraphic markers. A permanent, negative carbon isotopic shift at both Site 281 and Blind River allows precise correlations to be made between the two sections and to other sites in the Pacific region. Close-interval sampling below the carbon shift at Site 281 revealed dramatic fluctuations in surface-water temperatures prior to a latest Miocene interval of refrigeration (Kapitean) and a strong pulse of dissolution between 6.6 and 6.2 ± 0.1 m.y., which may be related to a fundamental geochemical change in the oceans at the time of the carbon shift (6.3-6.2 m.y.). No similar close-interval sampling was possible at Blind River because of a lack of outcrop over the critical interval. The paleoclimatic histories from the two sections are very similar. Surface-water temperatures and Antarctic ice-cap volume appear to have been relatively stable during the late Middle-early Late Miocene (early-late Tongaporutuan). By 6.4 m.y. cooler conditions prevailed at Site 281. Between 6.3 and 6.2 ± 0.1 m.y. the carbon isotopic shift occurred, followed within 100,000 yr by a distinct shallowing of water depths at Blind River. The earliest Pliocene (Opoitian) is marked by increasing surface-water temperatures.
Abstract:
Estimates of larval supply can provide information on year-class strength that is useful for fisheries management. However, larval supply is difficult to monitor because long-term, high-frequency sampling is needed. The purpose of this study was to subsample an 11-year record of daily larval supply of blue crab (Callinectes sapidus) to determine the effect of sampling interval on variability in estimates of supply. The coefficient of variation in estimates of supply varied by 0.39 among years at a 2-day sampling interval and 0.84 at a 7-day sampling interval. For 8 of the 11 years, there was a significant correlation between mean daily larval supply and lagged fishery catch per trip (coefficient of correlation [r]=0.88). When these 8 years were subsampled, a 2-day sampling interval yielded a significant correlation with fishery data only 64.5% of the time and a 3-day sampling interval never yielded a significant correlation. Therefore, high-frequency sampling (daily or every other day) may be needed to characterize interannual variability in larval supply.
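A sketch of the subsampling logic is below: every k-th day of a daily series is taken at each possible starting phase, and the spread of the resulting mean-supply estimates shows how quickly precision degrades with the sampling interval. The synthetic lognormal series is a stand-in for the actual 11-year record, so the printed CVs are illustrative only.

```python
# Subsample a daily series at interval k for each starting phase and
# measure the variability of the estimated mean supply. The synthetic
# series is an illustrative stand-in for the real larval-supply record.
import numpy as np

rng = np.random.default_rng(1)
daily = rng.lognormal(mean=0.0, sigma=1.5, size=180)   # one season of daily supply

def subsampled_means(series, k):
    """Mean supply estimated from every k-th day, one estimate per phase."""
    return np.array([series[start::k].mean() for start in range(k)])

for k in (2, 3, 7):
    est = subsampled_means(daily, k)
    print(f"{k}-day interval: CV across phases = {est.std(ddof=1) / est.mean():.2f}")
```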
Abstract:
We provide a theoretical framework to explain the empirical finding that the estimated betas are sensitive to the sampling interval even when using continuously compounded returns. We suppose that stock prices have both permanent and transitory components. The permanent component is a standard geometric Brownian motion while the transitory component is a stationary Ornstein-Uhlenbeck process. The discrete time representation of the beta depends on the sampling interval and two components labelled "permanent and transitory betas". We show that if no transitory component is present in stock prices, then no sampling interval effect occurs. However, the presence of a transitory component implies that the beta is an increasing (decreasing) function of the sampling interval for more (less) risky assets. In our framework, assets are labelled risky if their "permanent beta" is greater than their "transitory beta" and vice versa for less risky assets. Simulations show that our theoretical results provide good approximations for the means and standard deviations of estimated betas in small samples. Our results can be perceived as indirect evidence for the presence of a transitory component in stock prices, as proposed by Fama and French (1988) and Poterba and Summers (1988).
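A schematic simulation of the mechanism is below (a discrete-time stand-in, not the paper's continuous-time derivation): the market has a random-walk permanent component and an AR(1) transitory component, the asset loads on them with a "permanent beta" and a "transitory beta", and OLS betas from h-period returns then drift toward the permanent beta as h grows. All parameter values are assumptions.

```python
# Schematic discrete-time stand-in for the permanent/transitory model.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
beta_perm, beta_trans, phi = 1.5, 0.5, 0.98   # phi: AR(1) persistence

dq = rng.normal(0.0, 0.01, n)                 # shocks to the permanent component
q = np.cumsum(dq)                             # permanent market component (random walk)
v = np.zeros(n)                               # transitory market component (AR(1))
eps = rng.normal(0.0, 0.01, n)
for t in range(1, n):
    v[t] = phi * v[t - 1] + eps[t]

m = q + v                                     # market log price
p = beta_perm * q + beta_trans * v            # asset log price (idiosyncratic noise omitted)

for h in (1, 5, 20, 100):                     # sampling interval in base periods
    rp = p[h::h] - p[:-h:h]                   # h-period asset returns
    rm = m[h::h] - m[:-h:h]                   # h-period market returns
    print(f"h = {h:3d}: estimated beta = {np.cov(rp, rm)[0, 1] / rm.var():.2f}")
```

With beta_perm greater than beta_trans as chosen here, the printed betas rise with h, matching the paper's prediction for more risky assets; swapping the two values makes them fall instead.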
Abstract:
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, time intervals for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of population pharmacokinetic models, which are generally nonlinear mixed effects models, no analytical solution is available to determine sampling windows. We propose a method for determining sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The method is applicable to determining sampling windows for any nonlinear mixed effects model, although our work focuses on an application to population pharmacokinetic models.
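As a toy illustration of the idea (not the paper's actual design criterion or sampler), one can treat a design-efficiency function over a candidate sampling time as an unnormalized density, draw from it with a Metropolis sampler, and read off a central interval of the draws as the sampling window. The efficiency function below is a made-up placeholder.

```python
# Toy sketch: Metropolis sampling from a hypothetical design-efficiency
# function, with a central interval of the draws reported as the window.
import numpy as np

rng = np.random.default_rng(3)

def efficiency(t):
    """Hypothetical efficiency of sampling at time t (peaked at t = 2 h)."""
    return np.exp(-((t - 2.0) ** 2) / 0.5) if 0.0 < t < 24.0 else 0.0

draws, t = [], 2.0
for _ in range(20_000):
    prop = t + rng.normal(0.0, 0.5)                       # random-walk proposal
    if rng.random() < efficiency(prop) / efficiency(t):   # Metropolis accept step
        t = prop
    draws.append(t)

lo, hi = np.quantile(draws[2_000:], [0.05, 0.95])   # drop burn-in, take 90% window
print(f"sampling window around the optimum: [{lo:.2f} h, {hi:.2f} h]")
```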
Abstract:
Standard approaches to ellipse fitting are based on minimizing the algebraic or geometric distance between the given data and a template ellipse. When the data are noisy and come from a partial ellipse, the state-of-the-art methods tend to produce biased ellipses. We rely on the sampling structure of the underlying signal and show that the x- and y-coordinate functions of an ellipse are finite-rate-of-innovation (FRI) signals and that their parameters are estimable from partial data. We consider both uniform and nonuniform sampling scenarios in the presence of noise and show that the data can be modeled as a sum of random amplitude-modulated complex exponentials. A low-pass filter is used to suppress noise and approximate the data as a sum of weighted complex exponentials. The annihilating filter used in FRI approaches is applied to estimate the sampling interval in closed form. We perform experiments on simulated and real data, and assess both objective and subjective performance in comparison with state-of-the-art ellipse fitting methods. The proposed method produces ellipses with less bias, and the mean-squared error is lower by about 2 to 10 dB. We demonstrate applications of ellipse fitting to iris images, starting from partial edge contours, and to free-hand ellipses drawn on a touch-screen tablet.
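The annihilating-filter step can be illustrated on a noise-free toy: a uniformly sampled ellipse coordinate is a sum of two complex exponentials, so a length-3 filter in the nullspace of a Toeplitz matrix of the samples reveals the angular sampling interval w from its roots. The signal values below are arbitrary; the paper's full pipeline additionally low-pass filters noisy, possibly nonuniformly sampled data.

```python
# Annihilating-filter sketch: recover the angular sampling interval of a
# sum of K = 2 complex exponentials (an ellipse coordinate function).
import numpy as np

w = 2 * np.pi * 0.085                    # true angular sampling interval
n = np.arange(16)
a = 0.7 * np.exp(1j * 0.4)               # complex amplitude (encodes ellipse shape)
x = a * np.exp(1j * w * n) + np.conj(a) * np.exp(-1j * w * n)   # K = 2 exponentials

K = 2                                     # annihilating filter has length K + 1 = 3
T = np.array([x[i:i + K + 1][::-1] for i in range(len(x) - K)])  # Toeplitz rows
_, _, vh = np.linalg.svd(T)
h = vh[-1].conj()                         # nullspace vector = annihilating filter

roots = np.roots(h)                       # roots sit at exp(+j*w) and exp(-j*w)
w_hat = np.abs(np.angle(roots)).mean()
print(f"true w = {w:.4f}, estimated w = {w_hat:.4f}")
```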