970 results for Setup errors
Abstract:
PURPOSE The range of patient setup errors in six dimensions detected in clinical routine for cranial as well as extracranial treatments was analyzed for linear accelerator based stereotactic treatments with frameless patient setup systems. Additionally, the need for re-verification of the patient setup when couch rotations are involved was analyzed for patients treated in the cranial region. METHODS AND MATERIALS A total of 2185 initial (i.e. after pre-positioning the patient with the infrared system but before image guidance) patient setup errors (1705 in the cranial and 480 in the extracranial region) obtained using ExacTrac (BrainLAB AG, Feldkirchen, Germany) were analyzed. Additionally, the patient setup errors as a function of the couch rotation angle were obtained by analyzing 242 setup errors in the cranial region. Before the couch was rotated, the patient setup error was corrected at a couch rotation angle of 0° with the aid of image guidance and the six degrees of freedom (6DoF) couch. For both situations, attainment rates for two different tolerances (tolerance A: ± 0.5 mm, ± 0.5°; tolerance B: ± 1.0 mm, ± 1.0°) were calculated. RESULTS The mean (± one standard deviation) initial patient setup errors for the cranial cases were -0.24 ± 1.21°, -0.23 ± 0.91° and -0.03 ± 1.07° for the pitch, roll and couch rotation axes and 0.10 ± 1.17 mm, 0.10 ± 1.62 mm and 0.11 ± 1.29 mm for the lateral, longitudinal and vertical axes, respectively. The attainment rate (all six axes simultaneously) was 0.6% for tolerance A and 13.1% for tolerance B. For the extracranial cases the corresponding values were -0.21 ± 0.95°, -0.05 ± 1.08° and -0.14 ± 1.02° for the pitch, roll and couch rotation axes and 0.15 ± 1.77 mm, 0.62 ± 1.94 mm and -0.40 ± 2.15 mm for the lateral, longitudinal and vertical axes. The attainment rate (all six axes simultaneously) was 0.0% for tolerance A and 3.1% for tolerance B. After initial setup correction and rotation of the couch to the treatment position, a re-correction had to be performed in 77.4% of all cases to fulfill tolerance A and in 15.6% of all cases to fulfill tolerance B. CONCLUSION The analysis of the data shows that all six axes of a 6DoF couch are used extensively for patient setup in clinical routine. To achieve high patient setup accuracy (e.g. for stereotactic treatments), a 6DoF couch is recommended. Moreover, re-verification of the patient setup after rotating the couch is required in clinical routine.
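As a worked illustration of the attainment-rate statistic reported above, the following sketch draws synthetic 6DoF errors from the quoted cranial means and standard deviations and checks both tolerances on all six axes simultaneously. The normality assumption and the generated data are illustrative only, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 6DoF setup errors (N x 6): pitch, roll, couch rotation [deg],
# lateral, longitudinal, vertical [mm] -- drawn with the cranial means/SDs
# quoted in the abstract, purely for illustration.
means = np.array([-0.24, -0.23, -0.03, 0.10, 0.10, 0.11])
sds   = np.array([ 1.21,  0.91,  1.07, 1.17, 1.62, 1.29])
errors = rng.normal(means, sds, size=(1705, 6))

def attainment_rate(errors, tol_rot, tol_trans):
    """Fraction of setups meeting the tolerance on all six axes at once."""
    tol = np.array([tol_rot] * 3 + [tol_trans] * 3)
    ok = np.all(np.abs(errors) <= tol, axis=1)
    return ok.mean()

print("tolerance A:", attainment_rate(errors, 0.5, 0.5))  # +/-0.5 deg, +/-0.5 mm
print("tolerance B:", attainment_rate(errors, 1.0, 1.0))  # +/-1.0 deg, +/-1.0 mm
```

The "simultaneously" in the abstract matters: requiring all six axes to pass at once yields far lower rates than any single axis would suggest.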
Abstract:
Objective: To evaluate three-dimensional translational setup errors and residual errors in image-guided radiosurgery, comparing frameless and frame-based techniques, using an anthropomorphic phantom. Materials and Methods: We initially used specific phantoms for the calibration and quality control of the image-guided system. For the hidden target test, we used an Alderson Radiation Therapy (ART)-210 anthropomorphic head phantom, into which we inserted four 5 mm metal balls to simulate target treatment volumes. Computed tomography images were then taken with the head phantom properly positioned for frameless and frame-based radiosurgery. Results: For the frameless technique, the mean error magnitude was 0.22 ± 0.04 mm for setup errors and 0.14 ± 0.02 mm for residual errors, the combined uncertainties being 0.28 mm and 0.16 mm, respectively. For the frame-based technique, the mean error magnitude was 0.73 ± 0.14 mm for setup errors and 0.31 ± 0.04 mm for residual errors, the combined uncertainties being 1.15 mm and 0.63 mm, respectively. Conclusion: The mean values, standard deviations, and combined uncertainties showed no evidence of significant differences between the two techniques when the ART-210 head phantom was used.
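The "mean error magnitude" reported above is the mean of the per-target 3D translational error norms. The sketch below shows that computation on invented deviations for four hidden targets; the abstract does not specify how the combined uncertainty is formed, so that figure is not reproduced here.

```python
import numpy as np

# Hypothetical per-target translational deviations (mm) between planned and
# measured ball positions (x, y, z); four targets as in the hidden target
# test. Values are made up for illustration only.
deviations = np.array([
    [ 0.10, -0.15,  0.12],
    [-0.08,  0.11,  0.16],
    [ 0.14,  0.05, -0.13],
    [-0.12, -0.09,  0.10],
])

# 3D error magnitude per target, then mean +/- one SD across targets,
# mirroring how "mean error magnitude" is reported in the abstract.
magnitudes = np.linalg.norm(deviations, axis=1)
print(f"mean error magnitude: {magnitudes.mean():.2f} "
      f"+/- {magnitudes.std(ddof=1):.2f} mm")
```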
Abstract:
Master's degree in Radiation Applied to Health Technologies.
Abstract:
Master's degree in Radiotherapy
Abstract:
Master's degree in Radiotherapy
Abstract:
This note investigates the adequacy of the finite-sample approximation provided by the Functional Central Limit Theorem (FCLT) when the errors are allowed to be dependent. We compare the distribution of the scaled partial sums of some data with the distribution of the Wiener process to which they converge. Our setup is purposely very simple in that it considers data generated from an ARMA(1,1) process. Yet, this is sufficient to bring out interesting conclusions about the particular elements which cause the approximations to be inadequate even in quite large sample sizes.
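A minimal simulation of the comparison described here, assuming the standard ARMA(1,1) recursion and its long-run variance; the parameter values and the choice to examine only the endpoint S_T(1) of the partial-sum process are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def arma11(T, phi, theta, sigma=1.0):
    """Simulate an ARMA(1,1) process u_t = phi*u_{t-1} + e_t + theta*e_{t-1}."""
    e = rng.normal(0.0, sigma, T + 1)
    u = np.zeros(T)
    u[0] = e[1] + theta * e[0]
    for t in range(1, T):
        u[t] = phi * u[t - 1] + e[t + 1] + theta * e[t]
    return u

T, phi, theta, reps = 200, 0.8, 0.5, 5000
# Long-run standard deviation of the ARMA(1,1): sigma*(1+theta)/(1-phi).
sigma_lr = (1 + theta) / (1 - phi)

# Under the FCLT, the scaled full-sample partial sum S_T(1) should be
# approximately N(0, 1); its sample variance over many replications measures
# the adequacy of the approximation at this T.
s = np.array([arma11(T, phi, theta).sum() / (sigma_lr * np.sqrt(T))
              for _ in range(reps)])
print(f"sample variance of S_T(1): {s.var():.3f}  (FCLT limit: 1)")
```

With strong positive dependence (phi near 1), the sample variance deviates noticeably from 1 even at T = 200, which is the kind of inadequacy the note documents.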
Abstract:
The long-term stability, high accuracy, all-weather capability, high vertical resolution, and global coverage of Global Navigation Satellite System (GNSS) radio occultation (RO) suggest it as a promising tool for global monitoring of atmospheric temperature change. With the aim of investigating and quantifying how well a GNSS RO observing system is able to detect climate trends, we are currently performing a (climate) observing system simulation experiment over the 25-year period 2001 to 2025, which involves quasi-realistic modeling of the neutral atmosphere and the ionosphere. We carried out two climate simulations with the general circulation model MAECHAM5 (Middle Atmosphere European Centre/Hamburg Model Version 5) of the MPI-M Hamburg, covering the period 2001–2025: one control run with natural variability only and one run also including anthropogenic forcings due to greenhouse gases, sulfate aerosols, and tropospheric ozone. On this basis, we perform quasi-realistic simulations of RO observables for a small GNSS receiver constellation (six satellites), state-of-the-art data processing for atmospheric profile retrieval, and a statistical analysis of temperature trends in both the “observed” climatology and the “true” climatology. Here we describe the setup of the experiment and results from a test bed study conducted to obtain a basic set of realistic estimates of observational errors (instrument- and retrieval processing-related errors) and sampling errors (due to spatial-temporal undersampling). The test bed results, obtained for a typical summer season and compared to the climatic 2001–2025 trends from the MAECHAM5 simulation including anthropogenic forcing, were found encouraging for performing the full 25-year experiment. They indicated that observational and sampling errors (each contributing about 0.2 K) are consistent with recent estimates of these errors from real RO data and that they should be sufficiently small for monitoring expected temperature trends in the global atmosphere over the next 10 to 20 years in most regions of the upper troposphere and lower stratosphere (UTLS). Inspection of the MAECHAM5 trends in different RO-accessible atmospheric parameters (microwave refractivity and pressure/geopotential height in addition to temperature) indicates complementary climate change sensitivity in different regions of the UTLS, so that optimized climate monitoring should combine information from all climatic key variables retrievable from GNSS RO data.
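For the error budget mentioned above: if the observational and sampling contributions (about 0.2 K each) are treated as independent, they combine in quadrature to roughly 0.28 K. The independence assumption is made here only for illustration; the abstract itself does not state how the components combine.

```python
import numpy as np

# Error budget sketch: independent error components add in quadrature.
# The 0.2 K figures are the rough per-component contributions quoted in
# the abstract; their independence is an assumption of this sketch.
observational_error_K = 0.2
sampling_error_K = 0.2
total_error_K = np.hypot(observational_error_K, sampling_error_K)
print(f"combined climatology error: ~{total_error_K:.2f} K")
```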
Abstract:
We address the problem of selecting the best linear unbiased predictor (BLUP) of the latent value (e.g., fasting serum glucose level) of sample subjects with heteroskedastic measurement errors. Using a simple example, we compare the usual mixed model BLUP to a similar predictor based on a mixed model framed in a finite population setup (FPMM) with two sources of variability, the first of which corresponds to simple random sampling and the second to heteroskedastic measurement errors. Under this last approach, we show that when measurement errors are subject-specific, the BLUP shrinkage constants are based on a pooled measurement error variance, as opposed to the individual ones generally considered for the usual mixed model BLUP. In contrast, when the heteroskedastic measurement errors are measurement condition-specific, the FPMM BLUP involves different shrinkage constants. We also show that in this setup, when measurement errors are subject-specific, the usual mixed model predictor is biased but has a smaller mean squared error than the FPMM BLUP, which points to some difficulties in the interpretation of such predictors. (C) 2011 Elsevier B.V. All rights reserved.
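A small numerical sketch of the contrast drawn above between the two shrinkage schemes, assuming a simple one-way model with known variance components; all numbers are invented, and the pooled variance is illustrated as a simple average (the abstract does not specify the exact pooling).

```python
import numpy as np

# One-way setup: y_ij = mu + b_i + e_ij, with subject-specific measurement
# error variances sigma2_e[i]. Variance components are taken as known here
# rather than estimated; all values are illustrative.
mu, sigma2_b = 100.0, 25.0                   # overall mean, between-subject var.
ybar = np.array([92.0, 104.0, 110.0, 98.0])  # subject means over m replicates
sigma2_e = np.array([4.0, 16.0, 9.0, 25.0])  # subject-specific error variances
m = 3                                        # replicates per subject

# Usual mixed model BLUP: shrinkage uses each subject's own error variance.
lam_individual = sigma2_b / (sigma2_b + sigma2_e / m)
blup_usual = mu + lam_individual * (ybar - mu)

# FPMM-style BLUP (subject-specific errors): shrinkage uses a pooled
# measurement error variance, so every subject shrinks by the same factor.
lam_pooled = sigma2_b / (sigma2_b + sigma2_e.mean() / m)
blup_fpmm = mu + lam_pooled * (ybar - mu)

print("usual BLUP:", blup_usual.round(2))
print("FPMM BLUP: ", blup_fpmm.round(2))
```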
Abstract:
The aSPECT spectrometer was designed to measure the spectrum of the protons emitted in the decay of free neutrons with high precision. From this spectrum, the electron-antineutrino angular correlation coefficient "a" can then be determined with high accuracy. The goal of this experiment is to determine this coefficient with an absolute relative error below 0.3%, i.e. well below the current literature value of 5%.

First measurements with the aSPECT spectrometer were performed at the Heinz Maier-Leibnitz research neutron source in Munich. However, time-dependent instabilities of the measurement background prevented a new determination of "a".

The present work is instead based on the most recent measurements with the aSPECT spectrometer at the Institut Laue-Langevin (ILL) in Grenoble, France. In these measurements, the instabilities of the measurement background had already been reduced considerably. Furthermore, various modifications were made to minimize systematic errors and to ensure more reliable operation of the experiment. Unfortunately, no usable result could be obtained because of excessive saturation effects in the receiver electronics. Nevertheless, these and further systematic errors were identified and reduced, in some cases even eliminated, which will benefit future beam times at aSPECT.

The main part of the present work deals with the analysis and reduction of the systematic errors caused by the electromagnetic field of aSPECT. This yielded numerous improvements; in particular, the systematic errors due to the electric field were reduced. The errors caused by the magnetic field were minimized to such an extent that an improvement on the current literature value of "a" is now possible. In addition, an NMR magnetometer tailored to the experiment was developed and refined in this work, reducing the uncertainties in the characterization of the magnetic field to a level at which they are negligible for a determination of "a" with an accuracy of at least 0.3%.
Abstract:
Purpose – The purpose of this paper is to investigate optimization of a placement machine in printed circuit board (PCB) assembly when a family setup strategy is adopted. Design/methodology/approach – A complete mathematical model is developed for the integrated problem of optimizing feeder arrangement and component placement sequences so as to minimize the makespan for a set of PCB batches. Owing to the complexity of the problem, a specific genetic algorithm (GA) is proposed. Findings – The established model is able to find the minimal makespan for a set of PCB batches by determining the feeder arrangement and placement sequences. However, exact solutions to the problem are impractical due to its complexity. Experimental tests show that the proposed GA can solve the problem both effectively and efficiently. Research limitations/implications – When a placement machine is set up for production of a set of PCB batches, the feeder arrangement of the machine and the component placement sequencing for each PCB type should be solved simultaneously so as to minimize the overall makespan. Practical implications – The paper investigates optimization of PCB assembly with a family setup strategy, which is adopted by many PCB manufacturers to reduce both setup costs and human errors. Originality/value – The paper investigates the feeder arrangement and placement sequencing problems when a family setup strategy is adopted, which has not been studied in the literature.
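The paper's GA is not specified in the abstract. As a rough illustration of the permutation-chromosome machinery such a GA builds on, the sketch below sequences placements on a single board with order crossover and swap mutation; feeder arrangement, multiple batches, and the actual makespan objective are omitted.

```python
import random

random.seed(0)

# Toy instance: 20 placement locations, head travel measured as Manhattan
# distance along an open tour. All data are invented.
points = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]

def tour_length(seq):
    return sum(abs(points[a][0] - points[b][0]) + abs(points[a][1] - points[b][1])
               for a, b in zip(seq, seq[1:]))

def order_crossover(p1, p2):
    """Copy a slice from parent 1, fill the rest in parent 2's order."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    rest = [g for g in p2 if g not in child]
    for k in range(len(child)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def mutate(seq, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]
    return seq

pop = [random.sample(range(len(points)), len(points)) for _ in range(60)]
for _ in range(300):
    pop.sort(key=tour_length)
    parents = pop[:20]                       # truncation selection
    children = [mutate(order_crossover(random.choice(parents),
                                       random.choice(parents)))
                for _ in range(40)]
    pop = parents + children

best = min(pop, key=tour_length)
print("best placement tour length:", round(tour_length(best), 1))
```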
Abstract:
Purpose: To establish the prevalence of refractive errors and ocular disorders in preschool and schoolchildren in Ibiporã, Brazil. Methods: A survey of 6- to 12-year-old children from public and private elementary schools in Ibiporã was carried out between 1989 and 1996. Visual acuity measurements were performed by trained teachers using Snellen's chart. Children with visual acuity <0.7 in at least one eye were referred for a complete ophthalmologic examination. Results: 35,936 visual acuity measurements were performed in 13,471 children. 1,966 children (14.59%) were referred for an ophthalmologic examination. Amblyopia was diagnosed in 237 children (1.76%), whereas strabismus was observed in 114 cases (0.84%). Cataract (n=17; 0.12%), chorioretinitis (n=38; 0.28%) and eyelid ptosis (n=6; 0.04%) were also diagnosed. Among the 614 children (4.55%) found to have refractive errors, 284 (46.25%) had hyperopia (hyperopia or hyperopic astigmatism), 206 (33.55%) had myopia (myopia or myopic astigmatism) and 124 (20.19%) had mixed astigmatism. Conclusions: The study determined the local prevalence of amblyopia, refractive errors and eye disorders among preschool and schoolchildren.
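The prevalence figures above are simple ratios of case counts to the 13,471 screened children; the sketch below reproduces them from the counts quoted in the abstract (up to small rounding differences in the published percentages).

```python
# Prevalence arithmetic: each rate is cases / 13,471 screened children,
# expressed as a percentage. Counts are those quoted in the abstract.
screened = 13471
cases = {"referred": 1966, "amblyopia": 237, "strabismus": 114,
         "refractive errors": 614}
for condition, n in cases.items():
    print(f"{condition}: {100 * n / screened:.2f}%")
```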
Abstract:
This paper addresses the capacitated lot sizing problem (CLSP) with a single stage composed of multiple plants, items and periods, with setup carry-over among the periods. The CLSP is well studied and many heuristics have been proposed to solve it. Nevertheless, little research has explored the multi-plant capacitated lot sizing problem (MPCLSP), which means that few solution methods have been proposed for it. Furthermore, to our knowledge, no study of the MPCLSP with setup carry-over was found in the literature. This paper presents a mathematical model and a GRASP (Greedy Randomized Adaptive Search Procedure) with path relinking for the MPCLSP with setup carry-over. This solution method is an extension and adaptation of a methodology previously adopted without the setup carry-over. Computational tests showed that adding the setup carry-over yields a significant improvement in solution value with only a small increase in computational time.
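GRASP with path relinking is only named in the abstract. The skeleton below illustrates its three ingredients (greedy randomized construction, local search, path relinking between elite solutions) on a toy 0/1 knapsack rather than on the MPCLSP itself; all data and tuning choices are invented.

```python
import random

random.seed(1)

# Toy 0/1 knapsack: maximize value subject to a weight capacity.
values  = [random.randint(5, 30) for _ in range(15)]
weights = [random.randint(1, 10) for _ in range(15)]
capacity = 40

def fitness(sol):
    w = sum(wi for wi, s in zip(weights, sol) if s)
    return sum(v for v, s in zip(values, sol) if s) if w <= capacity else -1

def greedy_randomized(alpha=0.3):
    """Construction: pick items randomly from a restricted candidate list."""
    sol, cand = [0] * 15, list(range(15))
    while cand:
        cand.sort(key=lambda i: -values[i] / weights[i])
        rcl = cand[:max(1, int(alpha * len(cand)))]
        i = random.choice(rcl)
        cand.remove(i)
        sol[i] = 1
        if fitness(sol) < 0:      # undo if capacity is violated
            sol[i] = 0
    return sol

def local_search(sol):
    """First-improvement bit-flip neighborhood."""
    improved = True
    while improved:
        improved = False
        for i in range(15):
            trial = sol[:]
            trial[i] ^= 1
            if fitness(trial) > fitness(sol):
                sol, improved = trial, True
    return sol

def path_relink(a, b):
    """Walk from solution a toward guiding solution b, keeping the best point."""
    best, cur = a[:], a[:]
    for i in [i for i in range(15) if a[i] != b[i]]:
        cur[i] = b[i]
        if fitness(cur) > fitness(best):
            best = cur[:]
    return best

elite = []
for _ in range(30):
    s = local_search(greedy_randomized())
    if elite:
        s = max(s, path_relink(s, random.choice(elite)), key=fitness)
    elite = sorted(elite + [s], key=fitness, reverse=True)[:5]

print("best value found:", fitness(elite[0]))
```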
Abstract:
A photometric procedure for the determination of ClO⁻ in tap water, employing a miniaturized multicommuted flow analysis setup and an LED-based photometer, is described. The analytical procedure was implemented using leucocrystal violet (LCV; 4,4',4″-methylidynetris(N,N-dimethylaniline), C₂₅H₃₁N₃) as a chromogenic reagent. Solenoid micropumps used to propel the solutions were assembled together with the photometer to compose a compact unit of small dimensions. After optimization of the control variables, the system was applied to the determination of ClO⁻ in tap water samples; for accuracy assessment, the samples were also analyzed by an independent method. Applying the paired t-test to the results obtained with both methods, no significant difference at the 95% confidence level was observed. Other useful features include low reagent consumption (2.4 μg of LCV per determination), a linear response ranging from 0.02 up to 2.0 mg L⁻¹ ClO⁻, a relative standard deviation of 1.0% (n = 11) for samples containing 0.2 mg L⁻¹ ClO⁻, a detection limit of 6.0 μg L⁻¹ ClO⁻, a sampling throughput of 84 determinations per hour, and a waste generation of 432 μL per determination.
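Figures of merit like the detection limit and RSD quoted above follow from routine calibration arithmetic. The sketch below shows one common convention (LOD = 3 × blank SD / slope) on invented calibration data; it is not the paper's data or necessarily its exact convention.

```python
import numpy as np

# Invented calibration standards for an LCV-type photometric method:
# concentration in mg/L vs absorbance.
conc = np.array([0.02, 0.1, 0.5, 1.0, 1.5, 2.0])
absorbance = np.array([0.004, 0.021, 0.103, 0.208, 0.309, 0.414])

slope, intercept = np.polyfit(conc, absorbance, 1)  # linear calibration fit

# Detection limit by the common 3-sigma convention (an assumption here):
# three times the blank standard deviation divided by the slope.
sd_blank = 0.0004
lod = 3 * sd_blank / slope
print(f"LOD ~ {1000 * lod:.1f} ug/L")

# Relative standard deviation of replicate readings of one sample.
replicates = np.array([0.205, 0.207, 0.204, 0.206, 0.205])
print(f"RSD = {100 * replicates.std(ddof=1) / replicates.mean():.1f}%")
```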
Abstract:
In this work we investigate knowledge acquisition as performed by multiple agents interacting as they infer, in the presence of observation errors, respective models of a complex system. We focus on the specific case in which, at each time step, each agent takes into account its current observation as well as the average of the models of its neighbors. The agents are connected by an interaction network of Erdős-Rényi or Barabási-Albert type. First, we investigate situations in which one of the agents has a different (higher or lower) probability of observation error. It is shown that the influence of this special agent on the quality of the models inferred by the rest of the network can be substantial, varying linearly with the degree of the agent with the different estimation error. When the degree of this agent is taken as its fitness parameter, the effect of the different estimation error is even more pronounced, becoming superlinear. To complement our analysis, we provide the analytical solution for the overall performance of the system. We also investigate the knowledge acquisition dynamics when the agents are grouped into communities. We verify that the inclusion of edges between agents (within a community) having a higher probability of observation error degrades the quality of the estimates of the agents in the other communities.
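A minimal sketch of the update rule described here, assuming each agent blends its own noisy observation of a scalar "true" state equally with the average of its neighbors' models on an Erdős-Rényi graph; the graph density, noise levels, and 50/50 blend weight are illustrative choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

n, p, true_state = 50, 0.1, 1.0
adj = rng.random((n, n)) < p                 # Erdos-Renyi adjacency
adj = np.triu(adj, 1)
adj = adj | adj.T                            # symmetric, no self-loops

sigma = np.full(n, 0.5)
sigma[0] = 2.0                               # one agent observes far more noisily

models = np.zeros(n)
for _ in range(200):
    obs = true_state + rng.normal(0.0, sigma)        # noisy observations
    deg = adj.sum(axis=1)
    neighbor_avg = np.where(deg > 0, adj @ models / np.maximum(deg, 1), models)
    models = 0.5 * obs + 0.5 * neighbor_avg          # blend own obs + neighbors

err = np.abs(models - true_state)
print(f"|error| of the noisy agent vs mean of the others: "
      f"{err[0]:.3f} vs {err[1:].mean():.3f}")
```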
Abstract:
Background: There are several studies in the literature on measurement error in gene expression data, and several others on regulatory network models. However, only a small fraction combines measurement error with mathematical regulatory networks and shows how to identify such networks under different rates of noise. Results: This article investigates the effects of measurement error on the estimation of the parameters of regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of regulatory network models, p-values computed by ignoring measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. To overcome these problems, we present an improved version of the Ordinary Least Squares estimator for independent (regression models) and dependent (autoregressive models) data when the variables are subject to noise. Measurement error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones (i.e., those ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions: Measurement error seriously affects the identification of regulatory network models; thus, it must be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false positive rates identified in actual regulatory network models.
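The bias described here is the classical attenuation that predictor noise induces in least squares, and the correction amounts to removing it. A minimal sketch under a scalar regression with known measurement error variance (in the paper this variance is estimated from the microarray data); the specific method-of-moments form below is a standard textbook correction, not necessarily the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

# Latent predictor, noisy measurement of it, and a response driven by the
# latent value. All parameters are illustrative.
n, beta, sigma_u2 = 2000, 2.0, 0.5
x_true = rng.normal(0.0, 1.0, n)
x_obs = x_true + rng.normal(0.0, np.sqrt(sigma_u2), n)   # measurement error
y = beta * x_true + rng.normal(0.0, 0.3, n)

# Naive OLS on the noisy predictor: biased toward zero by the factor
# var(x_true) / (var(x_true) + sigma_u2).
beta_naive = (x_obs @ y) / (x_obs @ x_obs)

# Corrected estimator: subtract the known measurement error variance from
# the denominator (method-of-moments attenuation correction).
beta_corr = (x_obs @ y) / (x_obs @ x_obs - n * sigma_u2)

print(f"naive: {beta_naive:.3f}   corrected: {beta_corr:.3f}   true: {beta}")
```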