852 results for Initial data problem
Abstract:
In many applications the observed data can be viewed as a censored high-dimensional full data random variable X. By the curse of dimensionality it is typically not possible to construct estimators that are asymptotically efficient at every probability distribution in a semiparametric censored data model of such a high-dimensional censored data structure. We provide a general method for the construction of one-step estimators that are efficient at a chosen submodel of the full-data model, are still well behaved off this submodel, and can be chosen to always improve on a given initial estimator. These one-step estimators rely on good estimators of the censoring mechanism and thus will require a parametric or semiparametric model for the censoring mechanism. We present a general theorem that provides a template for proving the desired asymptotic results. We illustrate the general one-step estimation methods by constructing locally efficient one-step estimators of marginal distributions and regression parameters with right-censored data, current status data and bivariate right-censored data, in all models allowing the presence of time-dependent covariates. The conditions of the asymptotic theorem are rigorously verified in one of the examples, and the key condition of the general theorem is verified for all examples.
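For orientation only, the generic shape of such a one-step update can be written as follows; the notation below is ours and not taken from the paper: one starts from an initial estimator and adds the empirical mean of an estimated efficient influence curve at the chosen submodel, where the influence curve depends on the estimated censoring mechanism.

```latex
% Generic one-step update (illustrative notation only):
%   \hat{\mu}_n^0 : initial estimator of the parameter of interest
%   \hat{D}_n     : estimated efficient influence curve at the chosen submodel,
%                   computed under the estimated censoring mechanism
%   O_1,\dots,O_n : observed (censored) data
\hat{\mu}_n^1 \;=\; \hat{\mu}_n^0 \;+\; \frac{1}{n}\sum_{i=1}^{n} \hat{D}_n(O_i)
```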
Abstract:
OBJECTIVE: To describe the methodology and to present the baseline findings of the Attention-Deficit/Hyperactivity Disorder Observational Research in Europe (ADORE) study, the primary objective of which is to describe the relationship between the treatment regimen prescribed and the quality of life of children with ADHD in actual practice. METHODS: In this 2-year prospective observational study, data on diagnosis, prescribed treatment and outcomes of ADHD were collected at seven time points by paediatricians and child psychiatrists on 1,573 children recruited in 10 European countries. The data presented here from the 1,478 patients included in the analyses describe the baseline condition, the initial treatment regimen prescribed and the quality of life of families with children with ADHD. RESULTS: Patients had a mean age of 9.0 years (SD 2.5) and 84% were male. Physicians' diagnoses were made using DSM-IV (43%), ICD-10 (32%) and both DSM-IV and ICD-10 (12%). The mean age of awareness of a problem was 5.1 years, suggesting an average delay of approximately 4 years between awareness and diagnosis of ADHD. Baseline ADHD rating scale scores (physician-rated) indicated moderate to severe ADHD. Parent-rated SDQ scores were in agreement and suggested significant levels of co-existing problems. CGI-S, CGAS and CHIP-CE scores also indicated significant impairment. Patients were offered the following treatments after the initial assessment: pharmacotherapy (25%), psychotherapy (19%), a combination of pharmacotherapy and psychotherapy (25%), other therapy (10%) and no treatment (21%). CONCLUSION: The ADORE study shows that ADHD is similarly recognised across 10 European countries and that the children are significantly impaired across a wide range of domains. In this respect, they resemble children described in previous ADHD samples.
Abstract:
Studies of chronic life-threatening diseases often involve both mortality and morbidity. In observational studies, the data may also be subject to administrative left truncation and right censoring. Since mortality and morbidity may be correlated and mortality may censor morbidity, the Lynden-Bell estimator for left-truncated and right-censored data may be biased for estimating the marginal survival function of the non-terminal event. We propose a semiparametric estimator for this survival function based on a joint model for the two time-to-event variables, which utilizes the gamma frailty specification in the region of the observable data. First, we develop a novel estimator for the gamma frailty parameter under left truncation. Using this estimator, we then derive a closed-form estimator for the marginal distribution of the non-terminal event. The large-sample properties of the estimators are established via asymptotic theory. The methodology performs well with moderate sample sizes, both in simulations and in an analysis of data from a diabetes registry.
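For context, a standard gamma frailty (Clayton-type) specification ties the joint survival function of the non-terminal and terminal event times to their marginals through a single dependence parameter; the form below is the textbook version in our own notation and is not necessarily the exact parameterization used in the paper.

```latex
% Gamma frailty (Clayton-type) joint survival function, illustrative form:
%   S_1, S_2 : marginal survival functions of the non-terminal and terminal events
%   \theta   : dependence (frailty) parameter, \theta > 0
S(t_1, t_2) \;=\; \bigl[\, S_1(t_1)^{-\theta} + S_2(t_2)^{-\theta} - 1 \,\bigr]^{-1/\theta}
```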
Abstract:
Recurrent event data are largely characterized by the rate function, but smoothing techniques for estimating the rate function have never been rigorously developed or studied in the statistical literature. This paper considers the moment and least squares methods for estimating the rate function from recurrent event data. Under an independent censoring assumption on the recurrent event process, we study the statistical properties of the proposed estimators and propose bootstrap procedures for bandwidth selection and for the approximation of confidence intervals in the estimation of the occurrence rate function. It is shown that the moment method, without resmoothing via a smaller bandwidth, will produce a curve with nicks occurring at the censoring times, whereas there is no such problem with the least squares method. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in the implementation of the bootstrap procedures, the moment method is computationally more efficient than the least squares method because the former approach uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.
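As a rough sketch of what a moment-type kernel estimate of an occurrence rate function can look like under independent censoring, the snippet below implements a generic Nelson-Aalen-style construction (Gaussian kernel weights on event times, divided by the number of subjects still under observation). It is not the authors' exact estimator, and the function and variable names are ours.

```python
import numpy as np

def kernel_rate_estimate(event_times, censor_times, grid, bandwidth):
    """Moment-type kernel estimate of a recurrent-event rate function.

    event_times  : list of 1-D arrays, event times for each subject
    censor_times : 1-D array, end-of-follow-up (censoring) time per subject
    grid         : 1-D array of time points at which to evaluate the rate
    bandwidth    : kernel bandwidth h
    """
    censor_times = np.asarray(censor_times, dtype=float)
    grid = np.asarray(grid, dtype=float)
    rate = np.zeros_like(grid)
    for j, t in enumerate(grid):
        total = 0.0
        for times, c in zip(event_times, censor_times):
            times = np.asarray(times, dtype=float)
            # Gaussian kernel weights for this subject's observed events
            w = np.exp(-0.5 * ((t - times) / bandwidth) ** 2) / (
                bandwidth * np.sqrt(2.0 * np.pi))
            total += w.sum()
        # Number of subjects still under observation at time t
        at_risk = np.sum(censor_times >= t)
        rate[j] = total / max(at_risk, 1)
    return rate

# Tiny synthetic example: 3 subjects with recurrent events
events = [np.array([0.5, 1.2, 2.0]), np.array([0.8, 1.9]), np.array([1.1])]
censor = np.array([2.5, 2.0, 1.5])
grid = np.linspace(0.0, 2.5, 26)
print(kernel_rate_estimate(events, censor, grid, bandwidth=0.3))
```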
Abstract:
Data from 59 farms with complaints of udder health problems and insufficient quality of delivered milk that had been assessed by the Swiss Bovine Health Service (BHS) between 1999 and 2004 were retrospectively analysed. The data evaluated included farm characteristics such as farm size, herd size, average milk yield, milking system and housing system, deficits of the milking equipment and the milking practices, and bacteriological results of milk samples from all cows in lactation. The average size of the farms assessed by the BHS was larger than the Swiss average. Of the milking installations that were evaluated, 42 showed obvious failures which the farm managers could have noticed. Only 5 of the 57 milkers carried out their work according to the generally valid guidelines of the National Mastitis Council. More than 2 basic mistakes were observed in the milking practices of 36 milkers. In 51 farms, mixed infections with several problem bacteria (those present in at least 20% of the tested cows on a farm) were found. Staphylococcus aureus proved to be the most common problem germ. Staphylococcus aureus was detected as the bacterium responsible for the herd problem (the sole problem bacterium detectable on a particular farm) in 4 farms. The current study revealed that the education of farmers in the area of milking techniques and milking practices should be improved in order to reduce the incidence of udder health problems at the herd level. Staphylococcus aureus is the most important problem bacterium involved in herds with udder health problems in Switzerland. Staphylococcus aureus might be used in practice as the indicator germ for early recognition of management problems on dairy farms.
Abstract:
Stem cells of various tissues are typically defined as multipotent cells with 'self-renewal' properties. Despite the increasing interest in stem cells, surprisingly little is known about the number of times stem cells can or do divide over a lifetime. Based on telomere-length measurements of hematopoietic cells, we previously proposed that the self-renewal capacity of hematopoietic stem cells is limited by progressive telomere attrition and that such cells divide very rapidly during the first year of life. Recent studies of patients with aplastic anemia resulting from inherited mutations in telomerase genes support the notion that the replicative potential of hematopoietic stem cells is directly related to telomere length, which is indirectly related to telomerase levels. To revisit conclusions about stem cell turnover based on cross-sectional studies of telomere length, we performed a longitudinal study of telomere length in leukocytes from newborn baboons. All four individual animals studied showed a rapid decline in telomere length (approximately 2-3 kb) in granulocytes and lymphocytes in the first year after birth. After 50-70 weeks the telomere length appeared to stabilize in all cell types. These observations suggest that hematopoietic stem cells, after an initial phase of rapid expansion, switch at around 1 year of age to a different functional mode characterized by a markedly decreased turnover rate.
Abstract:
Recent developments in clinical radiology have resulted in additional developments in the field of forensic radiology. After the implementation of cross-sectional radiology and optical surface documentation in forensic medicine, difficulties in the validation and analysis of the acquired data were experienced. To address this problem, and to allow comparison of autopsy and radiological data, a centralized internet-based database for forensic cases was created. The main goals of the database are (1) creation of a digital and standardized documentation tool for forensic-radiological and pathological findings; (2) establishing a basis for validation of forensic cross-sectional radiology as a non-invasive examination method in forensic medicine, that is, comparing and evaluating the radiological and autopsy data and analyzing the accuracy of such data; and (3) providing a conduit for continuing research and education in forensic medicine. Considering the infrequent availability of CT or MRI for forensic institutions and the heterogeneous nature of case material in forensic medicine, an evaluation of the benefits and limitations of cross-sectional imaging concerning certain forensic features by a single institution may be of limited value. A centralized database permitting international forensic and cross-disciplinary collaborations may provide important support for forensic-radiological casework and research.
Abstract:
One limitation to the widespread implementation of Monte Carlo (MC) patient dose-calculation algorithms for radiotherapy is the lack of a general and accurate source model of the accelerator radiation source. Our aim in this work is to investigate the sensitivity of the photon-beam subsource distributions in an MC source model (with target, primary collimator, and flattening filter photon subsources and an electron subsource) for 6- and 18-MV photon beams when the energy and radial distributions of the initial electrons striking a linac target change. For this purpose, phase-space data (PSD) were calculated for various mean electron energies striking the target, various normally distributed electron energy spreads, and various normally distributed electron radial intensity distributions. All PSD were analyzed in terms of energy, fluence, and energy fluence distributions, which were compared between the different parameter sets. The energy spread was found to have a negligible influence on the subsource distributions. The mean energy and radial intensity significantly changed the target subsource distribution shapes and intensities. For the primary collimator and flattening filter subsources, the distribution shapes of the fluence and energy fluence changed little for different mean electron energies striking the target; however, their relative intensity compared with the target subsource changes, which can be accounted for by a scaling factor. This study indicates that adjustments to MC source models can likely be limited to adjusting the target subsource in conjunction with scaling the relative intensity and energy spectrum of the primary collimator, flattening filter, and electron subsources when the energy and radial distributions of the initial electron beam change.
Abstract:
Constructing a 3D surface model from sparse-point data is a nontrivial task. Here, we report an accurate and robust approach for reconstructing a surface model of the proximal femur from sparse-point data and a dense-point distribution model (DPDM). The problem is formulated as a three-stage optimal estimation process. The first stage, affine registration, iteratively estimates a scale and a rigid transformation between the mean surface model of the DPDM and the sparse input points. The estimation results of the first stage are used to establish point correspondences for the second stage, statistical instantiation, which stably instantiates a surface model from the DPDM using a statistical approach. This surface model is then fed to the third stage, kernel-based deformation, which further refines the surface model. Handling outliers is achieved by consistently employing the least trimmed squares (LTS) approach with a roughly estimated outlier rate in all three stages. If an optimal value of the outlier rate is preferred, we propose a hypothesis testing procedure to automatically estimate it. We present here our validations using four experiments: (1) a leave-one-out experiment; (2) an experiment evaluating the present approach for handling pathology; (3) an experiment evaluating the present approach for handling outliers; and (4) an experiment on reconstructing surface models of seven dry cadaver femurs using clinically relevant data without noise and with noise added. Our validation results demonstrate the robust performance of the present approach in handling outliers, pathology, and noise. An average 95th-percentile error of 1.7-2.3 mm was found when the present approach was used to reconstruct surface models of the cadaver femurs from sparse-point data with noise added.
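As a minimal, generic illustration of the least trimmed squares (LTS) idea used above for outlier handling, the sketch below fits a toy 1-D line with a simple "fit, trim the largest residuals, refit" loop; it is not the paper's three-stage registration, and the function name and the fixed trimming fraction are ours.

```python
import numpy as np

def lts_line_fit(x, y, outlier_rate=0.2, n_iter=20):
    """Fit y ~ a*x + b with least trimmed squares.

    At each iteration, fit ordinary least squares on the current inlier
    subset, then keep the (1 - outlier_rate) fraction of points with the
    smallest squared residuals as the next subset (a concentration step).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n_keep = max(2, int(round((1.0 - outlier_rate) * len(x))))
    keep = np.arange(len(x))                     # start with all points
    for _ in range(n_iter):
        a, b = np.polyfit(x[keep], y[keep], 1)   # ordinary LS on subset
        res = (y - (a * x + b)) ** 2             # squared residuals, all points
        keep = np.argsort(res)[:n_keep]          # trim the largest residuals
    return a, b, keep

# Toy data: a line with a few gross outliers injected
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.2, 50)
y[::10] += 15.0
print(lts_line_fit(x, y, outlier_rate=0.2)[:2])
```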
Abstract:
PURPOSE: To prospectively determine quantitatively and qualitatively the timing of maximal enhancement of the normal small-bowel wall by using contrast material-enhanced multi-detector row computed tomography (CT). MATERIALS AND METHODS: This HIPAA-compliant study was approved by the institutional review board. After information on radiation risk was given, written informed consent was obtained from 25 participants with no history of small-bowel disease (mean age, 58 years; 19 men) who had undergone single-level dynamic CT. Thirty seconds after the intravenous administration of contrast material, a serial dynamic acquisition, consisting of 10 images obtained 5 seconds apart, was performed. Enhancement measurements were obtained over time from the small-bowel wall and the aorta. Three independent readers qualitatively assessed small-bowel conspicuity. Quantitative and qualitative data were analyzed during the arterial phase, the enteric phase (which represented peak small-bowel mural enhancement), and the venous phase. Statistical analysis included paired Student t test and Wilcoxon signed rank test with Bonferroni correction. A P value less than .05 was used to indicate a significant difference. RESULTS: The mean time to peak enhancement of the small-bowel wall was 49.3 seconds +/- 7.7 (standard deviation) and 13.5 seconds +/- 7.6 after peak aortic enhancement. Enhancement values were highest during the enteric phase (P < .05). Regarding small-bowel conspicuity, images obtained during the enteric phase were most preferred qualitatively; there was a significant difference between the enteric and arterial phases (P < .001) but not between the enteric and venous phases (P = .18). CONCLUSION: At multi-detector row CT, peak mural enhancement of the normal small bowel occurs on average about 50 seconds after intravenous administration of contrast material or 14 seconds after peak aortic enhancement.
Abstract:
This study's objective was to answer three research questions related to students' knowledge and attitudes about water quality and availability issues. It is important to understand what knowledge students have about environmental problems such as these, because today's students will become the problem solvers of the future. If environmental problems, such as those related to water quality, are ever going to be solved, students must be environmentally literate. Several methods of data collection were used. Surveys were given to both Bolivian and Jackson High School students in order to compare their initial knowledge and attitudes about water quality issues. To study the effects of instruction, a unit of instruction about water quality issues was then taught to the Jackson High School students to see what impact it would have on their knowledge. In addition, the learning of two different groups of Jackson High School students was compared: one group of general education students and a second group of students learning in an inclusion classroom, which included special education students and struggling learners from the general education population. Student and teacher journals, a unit test, and post-survey responses were included in the data set. Results suggested that, when comparing Bolivian students and Jackson High School students, the Jackson High School students were more knowledgeable concerning clean water infrastructure and its importance, despite the fact that these issues were less relevant to their lives than to those of their Bolivian counterparts. Overall, the data suggested that the instruction impacted the knowledge of all the Jackson High students, although the advanced Biology students appeared to show stronger gains than their peers in an inclusion classroom.
Abstract:
PURPOSE: To describe the implementation and use of an electronic patient-referral system as an aid to the efficient referral of patients to a remote and specialized treatment center. METHODS AND MATERIALS: A system for the exchange of radiotherapy data between different commercial treatment planning systems (TPSs) and a specially developed planning system for proton therapy has been developed through the use of the PAPYRUS diagnostic image standard as an intermediate format. To ensure the cooperation of the different TPS manufacturers, the number of data sets defined for transfer has been restricted to the three core data sets of CT, VOIs, and three-dimensional dose distributions. As a complement to the exchange of data, network-wide application-sharing (video-conferencing) technologies have been adopted to provide methods for the interactive discussion and assessment of treatment plans with one or more partner clinics. RESULTS: Through the use of evaluation plans based on the exchanged data, referring clinics can accurately assess the advantages offered by proton therapy on a patient-by-patient basis, while the practicality or otherwise of the proposed treatments can simultaneously be assessed by the proton therapy center. Such a system, along with the interactive capabilities provided by video-conferencing methods, has been found to be an efficient solution to the problem of patient assessment and selection at a specialized treatment center, and is a necessary first step toward the full electronic integration of such centers with their remotely situated referral centers.
Abstract:
Purpose: Development of an interpolation algorithm for re-sampling spatially distributed CT data with the following features: global and local integral conservation, avoidance of negative interpolation values for positively defined datasets, and the ability to control re-sampling artifacts. Method and Materials: The interpolation can be separated into two steps: first, the discrete CT data have to be continuously distributed by an analytic function considering the boundary conditions. Generally, this function is determined by piecewise interpolation. Instead of using linear or high-order polynomial interpolations, which do not fulfill all of the above-mentioned features, a special form of Hermitian curve interpolation is used to solve the interpolation problem with respect to the required boundary conditions. A single parameter is determined, by which the behavior of the interpolation function is controlled. Second, the interpolated data have to be re-distributed with respect to the requested grid. Results: The new algorithm was compared with commonly used interpolation functions based on linear and second-order polynomials. It is demonstrated that these interpolation functions may over- or underestimate the source data by about 10%-20%, while the parameter of the new algorithm can be adjusted in order to significantly reduce these interpolation errors. Finally, the performance and accuracy of the algorithm were tested by re-gridding a series of X-ray CT images. Conclusion: Inaccurate sampling values may occur due to the lack of integral conservation. Re-sampling algorithms using high-order polynomial interpolation functions may result in significant artifacts in the re-sampled data. Such artifacts can be avoided by using the new algorithm based on Hermitian curve interpolation.
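For intuition only, the snippet below re-samples a positive 1-D profile with a monotone (shape-preserving) piecewise-cubic Hermite interpolant, which avoids the overshoot and negative values that plain higher-order polynomials can produce; it does not implement the paper's algorithm, in particular neither its integral-conservation constraint nor its single control parameter, and the toy profile values are made up.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

# Coarse, strictly positive "CT-like" profile with a sharp edge (toy values)
x_coarse = np.arange(0.0, 10.0, 1.0)
y_coarse = np.array([5, 5, 5, 5, 100, 100, 5, 5, 5, 5], dtype=float)

x_fine = np.linspace(0.0, 9.0, 91)

# Shape-preserving piecewise-cubic Hermite interpolation: no overshoot,
# and non-negative output for non-negative input
y_pchip = PchipInterpolator(x_coarse, y_coarse)(x_fine)

# Ordinary cubic spline for comparison: may over- or undershoot near the edge
y_spline = CubicSpline(x_coarse, y_coarse)(x_fine)

print("min of Hermite-type re-sampling:", y_pchip.min())
print("min of cubic-spline re-sampling:", y_spline.min())
```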
Abstract:
INTRODUCTION: Natural orifice transluminal endoscopic surgery (NOTES) is a multidisciplinary surgical technique. If conventional endoscopic instrumentation can be easily mastered, surgeons with laparoscopic experience could head NOTES interventions. MATERIALS AND METHODS: Thirty individuals were tested for endoscopic dexterity. Group 1 included seven gastroenterologists, group 2 included 12 laparoscopically experienced surgeons lacking endoscopic experience, and group 3 included 11 interns who had no hands-on endoscopic or surgical experience. Each individual repeated an easy (T1), medium (T2), and difficult (T3) task ten times with endoscopic equipment on a NOTES skills-box. RESULTS: Group 3 had significantly poorer performances for all three tasks compared to the other groups. No significant differences were seen between groups 1 and 2 for T1 and T2. The initial T3 performance of group 1 was better than that of group 2, but their performance after repetition was not statistically different. Groups 2 and 3 improved significantly with repetition, and group 2 eventually performed as well as group 1. CONCLUSIONS: The data indicate that laparoscopic surgeons quickly learned to handle the endoscopic equipment. This suggests that a lack of endoscopic experience does not handicap laparoscopic surgeons when performing endoscopic tasks. Based on their knowledge of anatomy and the complication management acquired during surgical education, surgeons are well equipped to take the lead in interdisciplinary NOTES collaborations.
Abstract:
An important problem in unsupervised data clustering is how to determine the number of clusters. Here we investigate how this can be achieved in an automated way by using interrelation matrices of multivariate time series. Two nonparametric and purely data-driven algorithms are expounded and compared. The first exploits the eigenvalue spectra of surrogate data, while the second employs the eigenvector components of the interrelation matrix. Compared to the first algorithm, the second approach is computationally faster and not limited to linear interrelation measures.
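A compact sketch of the surrogate-spectrum idea behind the first algorithm is given below; it is our own minimal implementation, using a correlation matrix as the interrelation matrix and independently shuffled channels as surrogates, which is not necessarily the surrogate scheme or interrelation measure used in the paper.

```python
import numpy as np

def n_clusters_by_surrogates(data, n_surrogates=100, quantile=0.99, seed=0):
    """Estimate the number of clusters in multivariate time series.

    data : array of shape (n_channels, n_samples)
    Counts how many eigenvalues of the channel-correlation matrix exceed
    the chosen quantile of the largest eigenvalue obtained from surrogate
    data in which each channel is shuffled independently (destroying
    inter-channel correlations while keeping amplitude distributions).
    """
    rng = np.random.default_rng(seed)
    eigvals = np.linalg.eigvalsh(np.corrcoef(data))
    surrogate_max = []
    for _ in range(n_surrogates):
        surr = np.array([rng.permutation(ch) for ch in data])
        surrogate_max.append(np.linalg.eigvalsh(np.corrcoef(surr)).max())
    threshold = np.quantile(surrogate_max, quantile)
    return int(np.sum(eigvals > threshold))

# Toy example: two groups of correlated channels plus pure-noise channels
rng = np.random.default_rng(1)
t = rng.normal(size=(2, 2000))                    # two latent signals
data = np.vstack([t[0] + 0.3 * rng.normal(size=(5, 2000)),
                  t[1] + 0.3 * rng.normal(size=(5, 2000)),
                  rng.normal(size=(3, 2000))])
print(n_clusters_by_surrogates(data))             # expected to report 2
```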