855 results for SYSTEMATIC-ERROR CORRECTION


Relevance: 30.00%
Publisher:
Abstract:

Clarke's matrix has been used as an eigenvector matrix for transposed three-phase transmission lines, and it can be applied as a phase-mode transformation matrix in transposed cases. For untransposed three-phase transmission lines, Clarke's matrix is not an exact eigenvector matrix. In this case, the errors in the diagonal elements of the Z and Y matrices can be considered negligible when these diagonal elements are compared with the exact mode-domain values. These comparisons are based on error and frequency-scan analyses. From these analyses, and considering untransposed asymmetrical three-phase transmission lines, a correction procedure is derived to improve the results obtained when Clarke's matrix is used as a phase-mode transformation matrix. With Clarke's matrix, the relative errors of the eigenvalue matrix elements can be considered negligible, but the relative values of the off-diagonal elements are significant. Applying the corrected transformation matrices reduces the relative values of the off-diagonal elements. Comparisons among the results of these analyses show that the homopolar mode is more sensitive to frequency than the other two modes of three-phase lines. © 2006 IEEE.
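A minimal numpy sketch of the modal-transformation property the abstract relies on: Clarke's matrix diagonalizes a balanced (ideally transposed) phase-impedance matrix exactly, while unequal mutual couplings of an untransposed line leave off-diagonal residues. The per-unit-length impedance values are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Power-invariant Clarke matrix (rows: alpha, beta, homopolar). Orthogonal: T @ T.T = I.
T = np.sqrt(2.0 / 3.0) * np.array([
    [1.0,               -0.5,               -0.5],
    [0.0,   np.sqrt(3.0) / 2,  -np.sqrt(3.0) / 2],
    [1.0 / np.sqrt(2),  1.0 / np.sqrt(2),  1.0 / np.sqrt(2)],
])

# Illustrative per-unit-length impedances (assumed values).
Zs, Zm = 0.35 + 1.20j, 0.18 + 0.55j

# Ideally transposed line: balanced Z is diagonalized exactly by Clarke's matrix.
Z_transp = np.full((3, 3), Zm, dtype=complex) + (Zs - Zm) * np.eye(3)
M_transp = T @ Z_transp @ T.T

# Untransposed line: unequal mutual couplings leave off-diagonal residues.
Zab, Zbc, Zca = 0.20 + 0.60j, 0.18 + 0.55j, 0.16 + 0.50j
Z_untr = np.array([[Zs, Zab, Zca],
                   [Zab, Zs, Zbc],
                   [Zca, Zbc, Zs]])
M_untr = T @ Z_untr @ T.T

off = lambda M: np.max(np.abs(M - np.diag(np.diag(M))))
print("transposed off-diagonal residue:  ", off(M_transp))  # numerically zero: exact modes
print("untransposed off-diagonal residue:", off(M_untr))    # nonzero: Clarke is approximate
```

The modal diagonal of the balanced case is [Zs − Zm, Zs − Zm, Zs + 2Zm], the last entry being the homopolar mode the abstract singles out.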

Relevance: 30.00%
Publisher:
Abstract:

Among the positioning systems that make up GNSS (Global Navigation Satellite System), GPS can provide low-, medium-, and high-precision positioning data. However, GPS observables are subject to many different types of error, and these systematic errors can degrade the accuracy of GPS positioning. They are mainly related to GPS satellite orbits, multipath, and atmospheric effects. To mitigate these errors, this study employed a semiparametric model and the penalized least squares technique. This amounts to changing the stochastic model by incorporating error functions, and the results are similar to those obtained by changing the functional model instead. Using this method, the ambiguities and the estimated station coordinates were shown to be more reliable and accurate than those obtained with a conventional least squares methodology.

Relevance: 30.00%
Publisher:
Abstract:

The GPS observables are subject to several errors. Among them, the systematic errors have the greatest impact, because they degrade the accuracy of the resulting positioning. These errors are mainly related to GPS satellite orbits, multipath, and atmospheric effects. Recently, a method has been suggested to mitigate them: the semiparametric model with the penalised least squares technique (PLS). In this method, the errors are modeled as functions varying smoothly in time. It amounts to changing the stochastic model, into which the error functions are incorporated; the results are similar to those obtained by changing the functional model. As a result, the ambiguities and the station coordinates are estimated with better reliability and accuracy than with the conventional least squares method (CLS). In general, the solution requires a shorter data interval, minimizing costs. The method's performance was analyzed in two experiments using data from single-frequency receivers. The first used a short baseline, where the main error was multipath. The second used a baseline of 102 km, where the predominant errors were due to ionospheric and tropospheric refraction. In the first experiment, using 5 minutes of data collection, the largest coordinate discrepancies with respect to the ground truth reached 1.6 cm and 3.3 cm in the h coordinate for PLS and CLS, respectively; in the second, also using 5 minutes of data, the discrepancies were 27 cm in h for PLS and 175 cm in h for CLS. These tests also showed a considerable improvement in ambiguity resolution using PLS relative to CLS, with a reduced data collection time interval. © Springer-Verlag Berlin Heidelberg 2007.
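The semiparametric/PLS idea can be sketched on synthetic data: a smooth systematic error function g(t) is estimated jointly with the parameters of interest by penalizing its roughness, so the smooth error no longer leaks into the parameter estimates as it does in conventional least squares. Everything below (design matrix, bias shape, noise level, penalty weight) is an illustrative assumption, not the study's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
t = np.linspace(0.0, 1.0, n)

A = rng.standard_normal((n, p))            # functional model (design matrix)
x_true = np.array([1.0, -2.0, 0.5])        # stand-in for station coordinates
bias = 0.5 * np.sin(2 * np.pi * t)         # smooth systematic error (multipath-like)
y = A @ x_true + bias + 0.01 * rng.standard_normal(n)

# Conventional least squares (CLS): the smooth error contaminates the parameters.
x_cls, *_ = np.linalg.lstsq(A, y, rcond=None)

# Penalized least squares (PLS): solve  min ||y - A x - g||^2 + lam * ||D g||^2,
# where D is the second-difference (roughness) operator acting on g.
D = np.diff(np.eye(n), n=2, axis=0)
lam = 1e4
K = np.block([[A.T @ A, A.T],
              [A,       np.eye(n) + lam * D.T @ D]])
rhs = np.concatenate([A.T @ y, y])
sol = np.linalg.solve(K, rhs)
x_pls, g_hat = sol[:p], sol[p:]

print("CLS parameter error:", np.linalg.norm(x_cls - x_true))
print("PLS parameter error:", np.linalg.norm(x_pls - x_true))
```

The block system is just the stationarity condition of the penalized objective with respect to x and g; the penalty weight trades smoothness of g against data fit.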

Relevance: 30.00%
Publisher:
Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 30.00%
Publisher:
Abstract:

I. Gunter and Christmas (1973) described the events leading to the stranding of a baleen whale on Ship Island, Mississippi, in 1968, giving the species as Balaenoptera physalus, the Rorqual. Unfortunately the identification was in error, but fortunately good photographs were shown. The underside of the tail was a splotched white, but there was no black margin. The specimen also had fewer throat and belly grooves than the Rorqual, as a comparison with True's (1904) photograph shows. Dr. James Mead (in litt.) pointed out that the animal was a Sei Whale, Balaenoptera borealis. This remains a new Mississippi record and, according to Lowery's (1974) count, it is the fifth specimen reported from the Gulf of Mexico. The stranding of a sixth Sei Whale on Anclote Keys in the Gulf, west of Tarpon Springs, Florida, on 30 May 1974, was reported in the newspapers and by the Smithsonian Institution (1974). II. Gunter, Hubbs and Beal (1955) gave measurements on a Pygmy Sperm Whale, Kogia breviceps, which stranded on Mustang Island on the Texas coast, and commented upon the recorded variations of proportional measurements in this species. Then, according to Raun, Hoese and Moseley (1970), these questions were resolved by Handley (1966), who showed that a second species, Kogia simus, the Dwarf Sperm Whale, is also present in the western North Atlantic. Handley's argument is based on skull comparisons and seems indisputable. According to Raun et al. (op. cit.), the stranding of a species of Kogia on Galveston Island recorded by Caldwell, Ingles and Siebenaler (1960) was K. simus. They also say that Caldwell (in litt.) had previously come to the same conclusion. Caldwell et al. also recorded another specimen from Destin, Florida, which is now considered to have been a specimen of simus. The known status of these two little sperm whales in the Gulf is summarized by Lowery (op. cit.).

Relevance: 30.00%
Publisher:
Abstract:

Estimates of evapotranspiration on a local scale are important information for agricultural and hydrological practices. However, equations that estimate potential evapotranspiration from temperature data alone, though simple to use, are usually less trustworthy than the Food and Agriculture Organization (FAO) Penman-Monteith standard method. The present work describes two correction procedures for temperature-based potential evapotranspiration estimates that make the results more reliable. Initially, the standard FAO Penman-Monteith method was evaluated with a complete climatological data set for the period between 2002 and 2006. Then, temperature-based estimates by the Camargo and Jensen-Haise methods were adjusted by error autocorrelation evaluated over biweekly and monthly periods. In a second adjustment, simple linear regression was applied. The adjusted equations were validated with climatic data available for the year 2001. Both proposed methodologies showed good agreement with the standard method, indicating that they can be used for local potential evapotranspiration estimates.
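The second adjustment, a simple linear regression of the standard method on the temperature-based estimate, can be sketched as follows. The synthetic daily values and the bias coefficients are assumptions for illustration only, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily potential ET (mm/day); illustrative values only.
et_pm = rng.uniform(1.0, 6.0, 365)                             # FAO Penman-Monteith "standard"
et_temp = 0.7 * et_pm + 1.2 + 0.2 * rng.standard_normal(365)   # biased temperature-based estimate

# Calibration period: fit ET_adj = a + b * ET_temp against the standard method.
b, a = np.polyfit(et_temp, et_pm, 1)
et_adj = a + b * et_temp

rmse = lambda x: np.sqrt(np.mean((x - et_pm) ** 2))
print(f"RMSE before adjustment: {rmse(et_temp):.2f} mm/day")
print(f"RMSE after adjustment:  {rmse(et_adj):.2f} mm/day")
```

In the study the coefficients would be fitted on the 2002-2006 calibration period and then validated on the held-out 2001 data.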

Relevance: 30.00%
Publisher:
Abstract:

Breast cancer is the most common cancer among women worldwide. Radiotherapy is commonly used after surgery to destroy any malignant cells remaining in the breast volume. Radiotherapy treatments must irradiate the target volume while limiting toxicity in healthy tissues. In clinical practice, the parameters that define the radiotherapy treatment plan are selected manually using treatment-simulation software. This trial-and-error process, in which the various parameters are modified and the treatment is re-simulated and re-evaluated, can require many iterations, making it time-consuming. The study presented in this thesis focuses on the automatic generation of treatment plans that irradiate the whole breast volume using two approximately opposed beams tangential to the patient. In particular, we concentrated on the selection of the beam directions and the isocenter position. To this end, we investigated the effectiveness of a combinatorial approach in which a large number of candidate treatment plans are generated using different combinations of the directions of the two beams. The beam intensity profiles are optimized automatically by an algorithm, called iCycle, developed at the Erasmus MC hospital in Rotterdam. From all the generated candidate plans, a subgroup with good coverage of the diseased breast volume is first selected. Then the plans with optimal characteristics for sparing the organs at risk (heart, lungs, and contralateral breast) are considered.
These treatment plans are mathematically equivalent, so a weighted sum was used to select the best among them, with the weights tuned so that, on average, the resulting plans have characteristics similar to clinically approved treatment plans. Compared with the manual process, this method not only considerably reduces the time needed to generate a treatment plan but also guarantees that the selected plans have optimal organ-sparing characteristics. Initially, the isocenter chosen in the clinic by the technician was used. In the final part of the study, the importance of the isocenter was evaluated; the result was that, at least for a subgroup of patients, the isocenter position can make an important contribution to the quality of the treatment plan and could therefore be an additional parameter to optimize.
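The weighted-sum selection among coverage-equivalent candidate plans might be sketched like this. The plan identifiers, dose metrics, and weights are invented for illustration; they are not the thesis's tuned values.

```python
# Hypothetical candidate plans that all meet target coverage; lower dose is better.
plans = [
    {"id": "g200", "heart_Gy": 2.1, "lung_Gy": 5.0, "contra_breast_Gy": 0.9},
    {"id": "g205", "heart_Gy": 1.4, "lung_Gy": 5.6, "contra_breast_Gy": 1.1},
    {"id": "g210", "heart_Gy": 1.6, "lung_Gy": 4.8, "contra_breast_Gy": 1.4},
]
# Weights tuned (in the thesis, against clinically approved plans) to trade off
# the organs at risk; these numbers are placeholders.
weights = {"heart_Gy": 3.0, "lung_Gy": 1.0, "contra_breast_Gy": 2.0}

def score(plan):
    # Weighted sum of organ-at-risk dose metrics: the lowest score wins.
    return sum(weights[k] * plan[k] for k in weights)

best = min(plans, key=score)
print(best["id"], round(score(best), 2))
```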

Relevance: 30.00%
Publisher:
Abstract:

Within the A4 experiment, the contributions of the strange quark to the electromagnetic form factors of the proton are measured. Such sea-quark effects in low-energy observables are important for understanding hadron structure, since they are a direct manifestation of the QCD degrees of freedom in the non-perturbative regime.

Linear combinations of the strangeness vector form factors of the proton, $G_E^s$ and $G_M^s$, are experimentally accessible through the measurement of the parity-violating asymmetry in the cross section of elastic scattering of longitudinally polarized electrons off unpolarized nucleons. Before this work, the A4 collaboration had published two such measurements at forward scattering angles at four-momentum transfers $Q^2$ of 0.23 and 0.10 (GeV/c)$^2$, respectively. To obtain the separation of $G_E^s$ and $G_M^s$ at the higher $Q^2$ value, a measurement at backward angles was carried out with a beam energy of 315 MeV.

In the A4 experiment, the electrons of a longitudinally polarized beam scattered off a liquid-hydrogen target are counted individually with a Cherenkov calorimeter. The calorimetric energy measurement separates the elastic from the inelastic events. At backward angles, this apparatus was extended with a scintillator serving as an electron tagger in order to suppress the $\gamma$ background from $\pi^0$ decay.

To make the analysis of this measurement possible, the measured energy spectra were studied in this work by means of detailed simulations of the scattering processes and of the detector response, and a method was developed to treat the remaining background from $\gamma$ conversion in front of the scintillator. The simulation results agree with the measurements at the 5% level, and it was demonstrated that the background-treatment method is applicable.

The backward-angle asymmetry measurement obtained after applying the background treatment developed here was combined with the forward-angle measurement at the same $Q^2$ for the separation of $G_E^s$ and $G_M^s$ at $Q^2$ = 0.22 (GeV/c)$^2$. The resulting values are

$G_M^s$ = -0.14 ± 0.11_{exp} ± 0.11_{theo} and
$G_E^s$ = 0.050 ± 0.038_{exp} ± 0.019_{theo},

where the systematic uncertainty due to the background treatment is included in the experimental error. At the end of the thesis, the implications of these results for the influence of strangeness on the static electromagnetic properties of the proton are discussed.

Relevance: 30.00%
Publisher:
Abstract:

The aSPECT spectrometer was designed to measure the spectrum of the protons from the decay of free neutrons with high precision. From this spectrum, the electron-antineutrino angular correlation coefficient "a" can be determined with high accuracy. The goal of this experiment is to determine this coefficient with an absolute relative error below 0.3%, i.e., well below the current literature value of 5%.

First measurements with the aSPECT spectrometer were performed at the Forschungsneutronenquelle Heinz Maier-Leibnitz in Munich. However, time-dependent instabilities of the measurement background prevented a new determination of "a".

The present work is based instead on the latest measurements with the aSPECT spectrometer at the Institut Laue-Langevin (ILL) in Grenoble, France. In these measurements, the background instabilities had already been considerably reduced. Furthermore, various changes were made to minimize systematic errors and to ensure more reliable operation of the experiment. Unfortunately, no usable result could be obtained because of excessive saturation effects in the receiver electronics. Nevertheless, these and further systematic errors were identified and reduced, and in part even eliminated, from which future beam times at aSPECT will profit.

The main part of this work deals with the analysis and improvement of the systematic errors caused by aSPECT's electromagnetic field. This led to numerous improvements; in particular, the systematic errors due to the electric field were reduced. The errors caused by the magnetic field could even be minimized to the point that an improvement on the current literature value of "a" is now possible.
In addition, an NMR magnetometer tailored to the experiment was developed in this work and improved to the point that the uncertainties in the characterization of the magnetic field are now negligible for a determination of "a" with an accuracy of at least 0.3%.

Relevance: 30.00%
Publisher:
Abstract:

Whether the use of mobile phones is a risk factor for brain tumors in adolescents is currently being studied. Case-control studies investigating this possible relationship are prone to recall error and selection bias. We assessed the potential impact of random and systematic recall error and selection bias on odds ratios (ORs) by performing simulations based on real data from an ongoing case-control study of mobile phones and brain tumor risk in children and adolescents (CEFALO study). Simulations were conducted for two mobile phone exposure categories: regular and heavy use. Our choice of levels of recall error was guided by a validation study that compared objective network operator data with the self-reported amount of mobile phone use in CEFALO. In our validation study, cases overestimated their number of calls by 9% on average and controls by 34%. Cases also overestimated their duration of calls by 52% on average and controls by 163%. The participation rates in CEFALO were 83% for cases and 71% for controls. In a variety of scenarios, the combined impact of recall error and selection bias on the estimated ORs was complex. These simulations are useful for the interpretation of previous case-control studies on brain tumor and mobile phone use in adults as well as for the interpretation of future studies on adolescents.
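The direction of a bias from differential recall error can be illustrated with a simple 2x2-table sketch. The counts and the cutoff-crossing fractions below (loosely motivated by the 9% and 34% average overestimation of call counts) are assumptions for illustration, not CEFALO simulation results.

```python
def odds_ratio(a, b, c, d):
    # a, b: exposed/unexposed cases; c, d: exposed/unexposed controls.
    return (a * d) / (b * c)

# Null-effect 2x2 table (true OR = 1); counts are illustrative.
a, b, c, d = 100, 100, 100, 100
print("true OR:", odds_ratio(a, b, c, d))

# Differential recall error: suppose overestimation of phone use pushes 9% of
# truly non-regular cases and 34% of truly non-regular controls over the
# "regular use" cutoff (assumed fractions).
a_obs, b_obs = a + 0.09 * b, b - 0.09 * b
c_obs, d_obs = c + 0.34 * d, d - 0.34 * d
print("OR under differential recall error:",
      round(odds_ratio(a_obs, b_obs, c_obs, d_obs), 2))
```

Because controls overreport more than cases, the observed OR is biased below the true value here; combining this with selection effects is what makes the overall impact complex.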

Relevance: 30.00%
Publisher:
Abstract:

A new physics-based technique for correcting inhomogeneities present in sub-daily temperature records is proposed. The approach accounts for changes in the sensor-shield characteristics that affect the energy balance depending on ambient weather conditions (radiation, wind). An empirical model is formulated that reflects the main atmospheric processes and can be used in the correction step of a homogenization procedure. The model accounts for the short- and long-wave radiation fluxes (including a snow cover component for albedo calculation) of a measurement system, such as a radiation shield. One part of the flux is further modulated by ventilation. The model requires only cloud cover and wind speed for each day, but detailed site-specific information is necessary. The final model has three free parameters, one of which is a constant offset. The three parameters can be determined, e.g., using the mean offsets for three observation times. The model is developed using the example of the change from the Wild screen to the Stevenson screen in the temperature record of Basel, Switzerland, in 1966. It is evaluated based on parallel measurements of both systems during a sub-period at this location, which were discovered during the writing of this paper. The model can be used in the correction step of homogenization to distribute a known mean step-size to every single measurement, thus providing a reasonable alternative correction procedure for high-resolution historical climate series. It also constitutes an error model, which may be applied, e.g., in data assimilation approaches.
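The structure of such a three-parameter model (constant offset plus a ventilation-damped radiative term) might be sketched as follows. All flux proxies and numbers are crude illustrations of the idea, not the paper's fitted formulation.

```python
import numpy as np

def correction(cloud, wind, snow, a0, a1, a2):
    """Assumed screen bias (K) for given daily weather; crude illustrative proxies."""
    sw = (1.0 - 0.7 * cloud) * (1.3 if snow else 1.0)  # shortwave proxy; snow raises albedo
    lw = 0.5 - cloud                                    # net longwave cooling proxy
    return a0 + (a1 * sw + a2 * lw) / (1.0 + wind)      # ventilation damps the flux term

# The three free parameters can be fixed from three known mean offsets, e.g. the
# mean step sizes at three observation times (conditions and offsets invented here).
conds = [(0.2, 1.0, False), (0.6, 2.0, False), (0.9, 0.5, True)]  # (cloud, wind, snow)
offsets = np.array([0.8, 0.3, -0.1])                              # mean offsets in K

M = np.array([[1.0,
               (1 - 0.7 * c) * (1.3 if s else 1.0) / (1 + w),
               (0.5 - c) / (1 + w)] for c, w, s in conds])
a0, a1, a2 = np.linalg.solve(M, offsets)

# The fitted model reproduces the three mean offsets and interpolates other days,
# distributing the known mean step-size to every single measurement.
print([round(correction(c, w, s, a0, a1, a2), 3) for c, w, s in conds])
```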

Relevance: 30.00%
Publisher:
Abstract:

BACKGROUND: Assessment of lung volume (FRC) and ventilation inhomogeneities with an ultrasonic flowmeter and multiple breath washout (MBW) has been used to provide important information about lung disease in infants. Sub-optimal adjustment of the mainstream molar mass (MM) signal for temperature and external deadspace may lead to analysis errors in infants with critically small tidal volume changes during breathing. METHODS: We measured expiratory temperature in human infants at 5 weeks of age and examined the influence of temperature and deadspace changes on FRC results with computer simulation modeling. A new analysis method with optimized temperature and deadspace settings was then derived, tested for robustness to analysis errors, and compared with the previously used analysis methods. RESULTS: Temperature in the facemask was higher, and variations of deadspace volumes larger, than previously assumed. Both had considerable impact upon FRC and lung clearance index (LCI) results, with high variability when obtained with the previously used analysis model. Using the measured temperature, we optimized the model parameters and tested a newly derived analysis method, which was found to be more robust to variations in deadspace. Comparison between the two analysis methods showed systematic differences and a wide scatter. CONCLUSION: Corrected deadspace and more realistic temperature assumptions improved the stability of the analysis of MM measurements obtained by ultrasonic flowmeter in infants. This new analysis method, using the only currently available commercial ultrasonic flowmeter for infants, may help to improve the stability of the analysis and further facilitate assessment of lung volume and ventilation inhomogeneities in infants.
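Why a small deadspace misestimate matters at infant tidal volumes can be illustrated with a toy single-compartment washout model: the recovered FRC scales with the assumed alveolar tidal volume, so a 2 mL deadspace error produces a roughly 9% FRC error at these volumes. All volumes and the tracer fraction are assumptions for illustration, not study data.

```python
# Well-mixed single-compartment washout; volumes in mL, illustrative for a young infant.
FRC_true, VT, VD_true, C0 = 100.0, 30.0, 8.0, 0.78  # FRC, tidal volume, deadspace, tracer fraction

def simulate_washout(n_breaths=40):
    """Return per-breath expired tracer volumes and the final alveolar concentration."""
    c, expired = C0, []
    va = VT - VD_true                         # true alveolar (deadspace-free) tidal volume
    for _ in range(n_breaths):
        c = c * FRC_true / (FRC_true + va)    # dilution by tracer-free inspirate
        expired.append(c * va)                # tracer leaving with the alveolar gas
    return expired, c

def analyse(expired, c_end, vd_assumed):
    """FRC = cumulative expired tracer / concentration drop; the analysis deadspace
    sets the alveolar tidal volume used to convert concentrations to tracer volumes."""
    va_assumed = VT - vd_assumed
    tracer_out = sum(e / (VT - VD_true) * va_assumed for e in expired)
    return tracer_out / (C0 - c_end)

expired, c_end = simulate_washout()
print("FRC with correct deadspace:   ", round(analyse(expired, c_end, VD_true), 2))
print("FRC with deadspace off by 2 mL:", round(analyse(expired, c_end, VD_true + 2.0), 2))
```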

Relevance: 30.00%
Publisher:
Abstract:

PURPOSE: To assess the literature on accuracy and clinical performance of computer technology applications in surgical implant dentistry. MATERIALS AND METHODS: Electronic and manual literature searches were conducted to collect information about (1) the accuracy and (2) clinical performance of computer-assisted implant systems. Meta-regression analysis was performed for summarizing the accuracy studies. Failure/complication rates were analyzed using random-effects Poisson regression models to obtain summary estimates of 12-month proportions. RESULTS: Twenty-nine different image guidance systems were included. From 2,827 articles, 13 clinical and 19 accuracy studies were included in this systematic review. The meta-analysis of the accuracy (19 clinical and preclinical studies) revealed a total mean error of 0.74 mm (maximum of 4.5 mm) at the entry point in the bone and 0.85 mm at the apex (maximum of 7.1 mm). For the 5 included clinical studies (total of 506 implants) using computer-assisted implant dentistry, the mean failure rate was 3.36% (0% to 8.45%) after an observation period of at least 12 months. In 4.6% of the treated cases, intraoperative complications were reported; these included limited interocclusal distances to perform guided implant placement, limited primary implant stability, or need for additional grafting procedures. CONCLUSION: Differing levels and quantity of evidence were available for computer-assisted implant placement, revealing high implant survival rates after only 12 months of observation in different indications and a reasonable level of accuracy. However, future long-term clinical data are necessary to identify clinical indications and to justify additional radiation doses, effort, and costs associated with computer-assisted implant surgery.

Relevance: 30.00%
Publisher:
Abstract:

BACKGROUND Neuronavigation has become an intrinsic part of preoperative surgical planning and surgical procedures. However, many surgeons have the impression that accuracy decreases during surgery. OBJECTIVE To quantify the decrease of neuronavigation accuracy and identify possible origins, we performed a retrospective quality-control study. METHODS Between April and July 2011, a neuronavigation system was used in conjunction with a specially prepared head holder in 55 consecutive patients. Two different neuronavigation systems were investigated separately. Coregistration was performed with laser-surface matching, paired-point matching using skin fiducials, anatomic landmarks, or bone screws. The initial target registration error (TRE1) was measured using the nasion as the anatomic landmark. Then, after draping and during surgery, the accuracy was checked at predefined procedural landmark steps (Mayfield measurement point and bone measurement point), and deviations were recorded. RESULTS After initial coregistration, the mean (SD) TRE1 was 2.9 (3.3) mm. The TRE1 was significantly dependent on patient positioning, lesion localization, type of neuroimaging, and coregistration method. The following procedures decreased neuronavigation accuracy: attachment of surgical drapes (ΔTRE2 = 2.7 [1.7] mm), skin retractor attachment (ΔTRE3 = 1.2 [1.0] mm), craniotomy (ΔTRE3 = 1.0 [1.4] mm), and Halo ring installation (ΔTRE3 = 0.5 [0.5] mm). Surgery duration was also a significant factor: the overall ΔTRE was 1.3 [1.5] mm after 30 minutes and increased to 4.4 [1.8] mm after 5.5 hours of surgery. CONCLUSION After registration, there is an ongoing loss of neuronavigation accuracy. The major factors were draping, attachment of skin retractors, and duration of surgery. Surgeons should be aware of this silent loss of accuracy when using neuronavigation.

Relevance: 30.00%
Publisher:
Abstract:

BACKGROUND The best-known cause of intolerance to fluoropyrimidines is dihydropyrimidine dehydrogenase (DPD) deficiency, which can result from deleterious polymorphisms in the gene encoding DPD (DPYD), including DPYD*2A and c.2846A>T. Three other variants (DPYD c.1679T>G, c.1236G>A/HapB3, and c.1601G>A) have been associated with DPD deficiency, but no definitive evidence for the clinical validity of these variants is available. The primary objective of this systematic review and meta-analysis was to assess the clinical validity of c.1679T>G, c.1236G>A/HapB3, and c.1601G>A as predictors of severe fluoropyrimidine-associated toxicity. METHODS We did a systematic review of the literature published before Dec 17, 2014, to identify cohort studies investigating associations between DPYD c.1679T>G, c.1236G>A/HapB3, and c.1601G>A and severe (grade ≥3) fluoropyrimidine-associated toxicity in patients treated with fluoropyrimidines (fluorouracil, capecitabine, or tegafur-uracil as single agents, in combination with other anticancer drugs, or with radiotherapy). Individual patient data were retrieved and analysed in a multivariable analysis to obtain an adjusted relative risk (RR). Effect estimates were pooled by use of a random-effects meta-analysis. The threshold for significance was set at a p value of less than 0·0167 (Bonferroni correction). FINDINGS 7365 patients from eight studies were included in the meta-analysis. DPYD c.1679T>G was significantly associated with fluoropyrimidine-associated toxicity (adjusted RR 4·40, 95% CI 2·08-9·30, p<0·0001), as was c.1236G>A/HapB3 (1·59, 1·29-1·97, p<0·0001). The association between c.1601G>A and fluoropyrimidine-associated toxicity was not significant (adjusted RR 1·52, 95% CI 0·86-2·70, p=0·15).
Analysis of individual types of toxicity showed consistent associations of c.1679T>G and c.1236G>A/HapB3 with gastrointestinal toxicity (adjusted RR 5·72, 95% CI 1·40-23·33, p=0·015; and 2·04, 1·49-2·78, p<0·0001, respectively) and haematological toxicity (adjusted RR 9·76, 95% CI 3·03-31·48, p=0·00014; and 2·07, 1·17-3·68, p=0·013, respectively), but not with hand-foot syndrome. DPYD*2A and c.2846A>T were also significantly associated with severe fluoropyrimidine-associated toxicity (adjusted RR 2·85, 95% CI 1·75-4·62, p<0·0001; and 3·02, 2·22-4·10, p<0·0001, respectively). INTERPRETATION DPYD variants c.1679T>G and c.1236G>A/HapB3 are clinically relevant predictors of fluoropyrimidine-associated toxicity. Upfront screening for these variants, in addition to the established variants DPYD*2A and c.2846A>T, is recommended to improve the safety of patients with cancer treated with fluoropyrimidines. FUNDING None.
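Random-effects pooling of study-level effect estimates, as used in this meta-analysis, can be sketched with the DerSimonian-Laird estimator on the log-risk scale. The per-study inputs below are invented placeholders, not the paper's data.

```python
import math

def dersimonian_laird(log_rr, se):
    """Random-effects pooling of per-study log relative risks (DerSimonian-Laird)."""
    w = [1.0 / s**2 for s in se]                         # fixed-effect (inverse-variance) weights
    fixed = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_rr))   # heterogeneity statistic
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)         # between-study variance estimate
    w_star = [1.0 / (s**2 + tau2) for s in se]           # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_star, log_rr)) / sum(w_star)
    se_mu = math.sqrt(1.0 / sum(w_star))
    return mu, se_mu, tau2

# Synthetic per-study (log RR, SE) pairs -- illustrative, not the paper's studies.
log_rr = [math.log(1.8), math.log(1.3), math.log(1.6), math.log(1.5)]
se = [0.25, 0.20, 0.30, 0.15]

mu, se_mu, tau2 = dersimonian_laird(log_rr, se)
lo, hi = math.exp(mu - 1.96 * se_mu), math.exp(mu + 1.96 * se_mu)
print(f"pooled RR {math.exp(mu):.2f} (95% CI {lo:.2f}-{hi:.2f}), tau^2 = {tau2:.3f}")
```

With homogeneous studies the between-study variance collapses to zero and the estimator reduces to the ordinary inverse-variance pooled estimate.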