990 results for "1 sigma error"
Abstract:
The new stage of the Mainz Microtron, MAMI, at the Institute for Nuclear Physics of the Johannes Gutenberg-University, operational since 2007, allows open strangeness experiments to be performed. To address the lack of electroproduction data at very low Q^2, the p(e,K+)Lambda and p(e,K+)Sigma0 reactions have been studied at Q^2 = 0.036 (GeV/c)^2 and Q^2 = 0.05 (GeV/c)^2 over a large angular range. Cross sections at W = 1.75 GeV will be given in angular bins and compared with the predictions of the Saclay-Lyon and Kaon-Maid isobaric models. We conclude that the original Kaon-Maid model, which has large longitudinal couplings of the photon to nucleon resonances, is unphysical. Extensive studies of the suitability of silicon photomultipliers as readout devices for a scintillating fiber tracking detector, with potential applications in both the positive and negative arms of the spectrometer, will be presented as well.
Abstract:
The electromagnetic nucleon form factors are fundamental quantities that are closely related to the electromagnetic structure of the nucleons. The dependence of the electric and magnetic Sachs form factors G_E and G_M on Q^2, the negative square of the four-momentum transfer in the electromagnetic scattering process, is directly related via Fourier transformation to the spatial charge and current distributions inside the nucleons. Precise measurements of the form factors over a wide Q^2 range are therefore needed for a quantitative understanding of nucleon structure.

Since free neutron targets do not exist, measuring the neutron form factors is difficult compared to measurements on the proton. As a consequence, the accuracy of the existing neutron form factor data is considerably lower than that of the proton form factors, and the covered Q^2 range is smaller. The electric Sachs form factor of the neutron, G_E^n, is particularly difficult to measure because, owing to the vanishing net charge of the neutron, it is very small compared to the other nucleon form factors. G_E^n characterizes the charge distribution of the electrically neutral neutron and is therefore especially sensitive to the internal structure of the neutron.

In the work presented here, G_E^n was determined from beam-helicity asymmetries in the quasi-elastic scattering vec{3He}(vec{e}, e'n)pp at a momentum transfer of Q^2 = 1.58 (GeV/c)^2. The measurement took place in Mainz at the Mainzer Mikrotron electron accelerator facility within the A1 collaboration in the summer of 2008.

Longitudinally polarized electrons with an energy of 1.508 GeV were scattered off a polarized ^3He gas target, which served as an effective polarized neutron target. The scattered electrons were detected in coincidence with the knocked-out neutrons; the electrons were detected in a magnetic spectrometer, while the detection of the neutrons in a matrix of plastic scintillators suppressed the contribution of quasi-elastic scattering off the proton.

Cross-section asymmetries with respect to the electron helicity are sensitive to G_E^n / G_M^n when the target polarization is oriented in the scattering plane and perpendicular to the momentum transfer; from their measurement G_E^n can be determined, since the magnetic form factor G_M^n is known with comparatively high precision. Additional asymmetry measurements with the polarization oriented parallel to the momentum transfer were used to reduce systematic errors.

Including the statistical (stat) and systematic (sys) errors, the measurement yielded G_E^n = 0.0244 +/- 0.0057_stat +/- 0.0016_sys.
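For orientation, a minimal sketch of the extraction principle described above, in a schematic plane-wave picture; the kinematic prefactors a and b below are placeholders and not values taken from the thesis:

% Schematic extraction of G_E^n from the helicity asymmetries (plane-wave sketch);
% a, b are kinematic factors, assumed here as placeholders.
A_\perp \propto a\, G_E^n G_M^n , \qquad A_\parallel \propto b\, (G_M^n)^2
\quad\Longrightarrow\quad
\frac{A_\perp}{A_\parallel} = \frac{a}{b}\,\frac{G_E^n}{G_M^n}
\quad\Longrightarrow\quad
G_E^n = \frac{b}{a}\,\frac{A_\perp}{A_\parallel}\, G_M^n ,

with G_M^n taken from existing measurements.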
Abstract:
The uncertainty in the determination of the stratigraphic profile of natural soils is one of the main problems in geotechnics, in particular for landslide characterization and modeling. This study deals with a new approach to geotechnical modeling that relies on the stochastic generation of different soil layer distributions following a Boolean logic; the method has therefore been called BoSG (Boolean Stochastic Generation). In this way, it is possible to randomize the presence of a specific material interdigitated in a uniform matrix. When building a geotechnical model, it is common to discard some stratigraphic data in order to simplify the model itself, assuming that the significance of the modeling results will not be affected. With the proposed technique it is possible to quantify the error associated with this simplification. Moreover, it can be used to identify the zones where further investigations and surveys would be most effective in building the geotechnical model of the slope. The commercial software FLAC was used for the 2D and 3D geotechnical models. The distribution of the materials was randomized through a specifically coded MatLab program that automatically generates text files, each representing a specific soil configuration. In addition, a routine was designed to automate the FLAC computations with the different data files in order to maximize the sample size. The methodology is applied to a simplified slope in 2D, a simplified slope in 3D, and an actual landslide, namely the Mortisa mudslide (Cortina d'Ampezzo, BL, Italy). However, it could be extended to numerous other cases, especially for hydrogeological analysis and landslide stability assessment in different geological and geomorphological contexts.
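As an illustration of the BoSG idea, the following is a minimal, hypothetical Python sketch: it randomly places rectangular lenses of a secondary material inside a uniform matrix and writes one text file per realization. The grid size, lens geometry, and file layout are assumptions for illustration only; the actual work used a MatLab program coupled to FLAC-specific input formats.

# Minimal sketch of Boolean Stochastic Generation (BoSG): randomly place lenses of a
# secondary material inside a uniform soil matrix and write one text file per realization.
# Grid size, lens geometry, and output format are illustrative assumptions, not the
# actual FLAC/MatLab implementation described in the thesis.
import numpy as np

def generate_realization(nx=100, ny=40, n_lenses=5, lens_size=(12, 3), rng=None):
    """Return an ny-by-nx integer grid: 0 = matrix material, 1 = interdigitated material."""
    rng = rng or np.random.default_rng()
    grid = np.zeros((ny, nx), dtype=int)
    lx, ly = lens_size
    for _ in range(n_lenses):
        x0 = rng.integers(0, nx - lx)
        y0 = rng.integers(0, ny - ly)
        grid[y0:y0 + ly, x0:x0 + lx] = 1   # Boolean union of the lens with the grid
    return grid

def write_realizations(n_runs=200, path_template="config_{:04d}.txt"):
    rng = np.random.default_rng(seed=1)
    for i in range(n_runs):
        grid = generate_realization(rng=rng)
        np.savetxt(path_template.format(i), grid, fmt="%d")  # one file per soil configuration

if __name__ == "__main__":
    write_realizations()

Each generated file can then be fed to the geotechnical solver in a batch loop, which is the role of the automation routine mentioned in the abstract.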
Abstract:
At present, services constitute the main employment sector and the largest source of income for developed economies, accounting for about three quarters of gross domestic product in both the United States and the United Kingdom (Piercy and Rich, 2009). Despite this considerable economic importance, however, organizations in this sector fail to deliver services of a quality that satisfies customer demands (Piercy and Rich, 2009). Even more worrying, indicators show service quality levels declining year after year (Dickson et al., 2005). This thesis analyzes Lean Six Sigma as a methodology for organizational change and business process improvement in the context of services, and particularly financial services. The aim of this work is to present Lean Six Sigma as applied to services, analyzing the critical success factors, the hindering factors, the internal organizational barriers, the differences between the manufacturing and service sectors, the tools, the objectives, and the benefits introduced. It also investigates the application of this methodology to a small-to-medium-sized Italian company, examining the aspects to be taken into account during its implementation.
Abstract:
This work describes the design, implementation, and experimental testing of a mechanism, integrated into the Linux 4.0 kernel, dedicated to detecting Wi-Fi frame losses.
Abstract:
Polar codes are the first class of error-correcting codes proven to achieve capacity for every symmetric, discrete, memoryless channel, thanks to a recently introduced method called "channel polarization". This thesis describes the main encoding and decoding algorithms in detail. In particular, the performance of the simulators developed for the Successive Cancellation Decoder and the Successive Cancellation List Decoder is compared with results reported in the literature. To improve the minimum distance, and consequently the performance, we use a concatenated scheme with the polar code as the inner code and a CRC as the outer code. We also propose a new technique for analyzing channel polarization in the case of transmission over an AWGN channel, which is the most appropriate statistical model for satellite communications and deep-space applications. In addition, we investigate the importance of an accurate approximation of the polarization functions.
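As background to the encoding step, the following is a minimal Python sketch of the basic polar transform x = u F^{⊗n} with the Arikan kernel F = [[1,0],[1,1]]; frozen-bit selection, bit-reversal ordering, the CRC concatenation, and the successive cancellation decoders discussed in the thesis are deliberately omitted.

# Minimal sketch of the polar transform x = u F^{⊗n}, with F = [[1,0],[1,1]] (Arikan kernel).
# Frozen-bit selection, bit-reversal permutation, and CRC concatenation are omitted.
def polar_transform(u):
    """Apply the basic polar (butterfly) transform to a bit vector of length 2^n."""
    n = len(u)
    if n == 1:
        return list(u)
    half = n // 2
    # Upper half encodes u_i XOR u_{i+half}, lower half encodes u_{i+half}, each recursively.
    upper = polar_transform([u[i] ^ u[i + half] for i in range(half)])
    lower = polar_transform([u[i + half] for i in range(half)])
    return upper + lower

# Example: encode an 8-bit input vector (in practice only the "good" synthetic channels
# carry information bits; the remaining positions are frozen to 0).
u = [1, 0, 1, 1, 0, 0, 1, 0]
x = polar_transform(u)
print(x)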
Abstract:
Modern imaging technologies, such as computed tomography (CT), represent a great challenge in forensic pathology. The forensic field has experienced a rapid increase in the use of these new techniques to support investigations of critical cases, as indicated by the implementation of CT scanning by different forensic institutions worldwide. Advances in CT imaging techniques over the past few decades have led some authors to propose that virtual autopsy, a radiological method applied to post-mortem analysis, is a reliable alternative to traditional autopsy, at least in certain cases. We investigate the occurrence and the causes of errors and mistakes in diagnostic imaging applied to virtual autopsy. A case of suicide by gunshot wound was submitted to full-body CT scanning before autopsy. We compared the first examination of the sectional images with the autopsy findings and found a preliminary misdiagnosis of a peritoneal gunshot lesion that was due to a radiologist's error. We then discuss an emerging issue: the risk of diagnostic failure in virtual autopsy due to radiologist error, similar to what occurs in clinical radiology practice.
Abstract:
Patients can make contributions to the safety of chemotherapy administration but little is known about their motivations to participate in safety-enhancing strategies. The theory of planned behavior was applied to analyze attitudes, norms, behavioral control, and chemotherapy patients' intentions to participate in medical error prevention.
Abstract:
Medical errors are a serious threat to chemotherapy patients. Patients can make contributions to safety but little is known about the acceptability of error-preventing behaviors and its predictors.
Abstract:
The excitonic splitting between the S1 and S2 electronic states of the doubly hydrogen-bonded dimer 2-pyridone·6-methyl-2-pyridone (2PY·6M2PY) is studied in a supersonic jet, applying two-color resonant two-photon ionization (2C-R2PI), UV-UV depletion, and dispersed fluorescence spectroscopies. In contrast to the C_2h-symmetric (2-pyridone)_2 homodimer, in which the S1 <- S0 transition is symmetry-forbidden but the S2 <- S0 transition is allowed, the symmetry breaking by the additional methyl group in 2PY·6M2PY leads to the appearance of both the S1 and S2 origins, which are separated by Delta_exp = 154 cm^-1. When combined with the separation of the S1 <- S0 excitations of 6M2PY and 2PY, which is delta = 102 cm^-1, one obtains an S1/S2 exciton coupling matrix element of V_AB,el = 57 cm^-1 in a Frenkel-Davydov exciton model. The vibronic couplings in the S1/S2 <- S0 spectrum of 2PY·6M2PY are treated by the Fulton-Gouterman single-mode model. We consider independent couplings to the intramolecular 6a' vibration and to the intermolecular sigma' stretch, and obtain a semi-quantitative fit to the observed spectrum. The dimensionless excitonic couplings are C(6a') = 0.15 and C(sigma') = 0.05, which places this dimer in the weak-coupling limit. However, the S1/S2 exciton splittings Delta_calc calculated with the configuration interaction singles (CIS), time-dependent Hartree-Fock (TD-HF), and approximate second-order coupled-cluster (CC2) methods are between 1100 and 1450 cm^-1, seven to nine times larger than observed. These huge errors result from the neglect of the coupling to the optically active intra- and intermolecular vibrations of the dimer, which leads to vibronic quenching of the purely electronic excitonic splitting. For 2PY·6M2PY the electronic splitting is quenched by a factor of ~30 (i.e., the vibronic quenching factor is Gamma_exp = 0.035), which brings the calculated splittings into close agreement with the experimentally observed value. The 2C-R2PI and fluorescence spectra of the tautomeric species 2-hydroxypyridine·6-methyl-2-pyridone (2HP·6M2PY) are also observed and assigned. (C) 2011 American Institute of Physics.
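For reference, the quoted coupling follows from the standard two-state (Frenkel-Davydov) relation between the observed splitting, the monomer offset, and the electronic coupling; this is a sketch consistent with the numbers above, not necessarily the exact expression used by the authors:

% Two-state exciton model: observed splitting Delta_exp from monomer offset delta and coupling V_AB,el
\Delta_{\mathrm{exp}} = \sqrt{\delta^{2} + \left(2 V_{AB,\mathrm{el}}\right)^{2}}
\quad\Longrightarrow\quad
V_{AB,\mathrm{el}} = \tfrac{1}{2}\sqrt{\Delta_{\mathrm{exp}}^{2} - \delta^{2}}
= \tfrac{1}{2}\sqrt{154^{2} - 102^{2}}\ \mathrm{cm^{-1}} \approx 57.7\ \mathrm{cm^{-1}},

in agreement with the quoted V_AB,el = 57 cm^-1.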
Abstract:
The purpose of this study was (1) to determine the frequency and type of medication errors (MEs), (2) to assess the number of MEs prevented by registered nurses, (3) to assess the consequences of MEs for patients, and (4) to compare the number of MEs reported by a newly developed medication error self-reporting tool with the number reported by the traditional incident reporting system. We conducted a cross-sectional study on MEs in the Cardiovascular Surgery Department of Bern University Hospital in Switzerland. Eligible registered nurses (n = 119) involved in the medication process were included. Data on MEs were collected using an investigator-developed medication error self-reporting tool (MESRT) that asked about the occurrence and characteristics of MEs. Registered nurses were instructed to complete a MESRT at the end of each shift even if there was no ME. All MESRTs were completed anonymously. During the one-month study period, a total of 987 MESRTs were returned. Of the 987 completed MESRTs, 288 (29%) indicated that there had been an ME. Registered nurses reported preventing 49 (5%) MEs. Overall, eight (2.8%) MEs had consequences for patients. The high response rate suggests that this new method may be a very effective approach to detect, report, and describe MEs in hospitals.
Abstract:
BACKGROUND: Physiological data obtained with the pulmonary artery catheter (PAC) are susceptible to errors in measurement and interpretation. Little attention has been paid to the relevance of errors in hemodynamic measurements performed in the intensive care unit (ICU). The aim of this study was to assess the errors related to the technical aspects (zeroing and reference level) and actual measurement (curve interpretation) of the pulmonary artery occlusion pressure (PAOP). METHODS: Forty-seven participants in a special ICU training program and 22 ICU nurses were tested without pre-announcement. All participants had previously been exposed to the clinical use of the method. The first task was to set up a pressure measurement system for the PAC (zeroing and reference level) and the second to measure the PAOP. RESULTS: The median difference from the reference mid-axillary zero level was -3 cm (-8 to +9 cm) for physicians and -1 cm (-5 to +1 cm) for nurses. The median difference from the reference PAOP was 0 mmHg (-3 to 5 mmHg) for physicians and 1 mmHg (-1 to 15 mmHg) for nurses. When PAOP values were adjusted for the differences from the reference transducer level, the median differences from the reference PAOP values were 2 mmHg (-6 to 9 mmHg) for physicians and 2 mmHg (-6 to 16 mmHg) for nurses. CONCLUSIONS: Measurement of the PAOP is susceptible to substantial error as a result of practical mistakes. Comparison of results between ICUs or practitioners is therefore not possible.