914 results for Errors and blunders, Literary.
Abstract:
Many efforts have been devoted in recent years to reducing uncertainty in hydrological modeling predictions. The principal sources of uncertainty are input errors, due to inaccurate rainfall predictions, and model errors, arising from the approximations with which water flow processes in the soil and river discharges are described. The aim of the present work is to develop a Bayesian model to reduce the uncertainty in discharge predictions for the Reno river. The a priori distribution function is given by an autoregressive model, while the likelihood function is provided by a linear equation relating past observed discharge values to the predictions of the hydrological TOPKAPI model, obtained from the rainfall predictions of the limited-area model COSMO-LAMI. The a posteriori estimates are obtained through an H∞ filter, because the statistical properties of the estimation errors are not known. In this work a stationary and a dual adaptive filter are implemented and compared. A statistical analysis of the estimation errors and the description of three case studies of flood events that occurred during the fall seasons from 2003 to 2005 are reported. The results also reveal that the errors can be described as a Markovian process only to a first approximation. For the same period, an ensemble of a posteriori estimates is obtained from the COSMO-LEPS rainfall predictions, but the spread of this a posteriori ensemble is not able to encompass the observed variability. This fact is related to the construction of the meteorological ensemble, whose spread reaches its maximum after 5 days. In the future, the use of a new ensemble, COSMO-SREPS, focused on the first 3 days, could help to enlarge the meteorological and, consequently, the hydrological variability.
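The prior/likelihood/posterior structure described above can be sketched in a few lines. Everything below is illustrative: the AR(1) coefficients, the fixed filter gain, and the synthetic discharge series are invented for the example (they are not the thesis's calibrated TOPKAPI/COSMO-LAMI values), and the full H∞ gain computation is replaced by a single fixed gain, as in the stationary-filter case.

```python
import numpy as np

def ar1_prior(q_prev, phi, c):
    """A-priori discharge estimate from an AR(1) model (hypothetical coefficients)."""
    return c + phi * q_prev

def posterior_update(q_prior, q_model, gain):
    """Blend the AR prior with the hydrological-model prediction using a
    fixed (stationary) gain; a convex combination of the two estimates."""
    return q_prior + gain * (q_model - q_prior)

# Synthetic example: an "observed" discharge series and noisy model predictions.
rng = np.random.default_rng(0)
q_obs = 100 + np.cumsum(rng.normal(0, 5, 50))    # observed discharge (made up)
q_model = q_obs + rng.normal(0, 8, 50)           # model predictions with error

phi, c, gain = 0.95, 5.0, 0.4                    # illustrative parameters
q_post = np.empty_like(q_obs)
q_post[0] = q_obs[0]
for t in range(1, len(q_obs)):
    prior = ar1_prior(q_obs[t - 1], phi, c)      # prior from the past observation
    q_post[t] = posterior_update(prior, q_model[t], gain)
```

Because the gain lies in (0, 1), each posterior estimate lies between the AR prior and the model prediction; the dual adaptive variant in the thesis would instead update the gain online.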
Abstract:
In the context of a testing laboratory, one of the most important aspects to deal with is the measurement result. Whenever decisions are based on measurement results, it is important to have some indication of the quality of those results. In every area concerned with noise measurement many standards are available, but without an expression of uncertainty it is impossible to judge whether two results are in compliance or not. ISO/IEC 17025 is an international standard concerning the competence of calibration and testing laboratories. It contains the requirements that testing and calibration laboratories have to meet if they wish to demonstrate that they operate a quality system, are technically competent, and are able to generate technically valid results. ISO/IEC 17025 deals specifically with the requirements for the competence of laboratories performing testing and calibration and for the reporting of the results, which may or may not contain opinions and interpretations of the results. The standard requires appropriate methods of analysis to be used for estimating the uncertainty of measurement. From this point of view, for a testing laboratory performing sound power measurements according to specific ISO standards and European Directives, the evaluation of uncertainties is the most important factor to deal with. Sound power level measurement according to ISO 3744:1994, performed with a limited number of microphones distributed over a surface enveloping a source, is affected by a certain systematic error and a related standard deviation. Comparing measurements carried out with different microphone arrays is difficult because the results are affected by systematic errors and standard deviations that depend on the number of microphones placed on the surface, their spatial positions, and the complexity of the sound field.
A statistical approach can give an overview of the differences between sound power levels evaluated with different microphone arrays and an evaluation of the errors that afflict this kind of measurement. In contrast to the classical approach, which tends to follow the ISO GUM, this thesis presents a different point of view on the problem of comparing results obtained from different microphone arrays.
Abstract:
The aim of this PhD thesis is to study accurately and in depth the figure and the literary production of the intellectual Jacopo Aconcio. This minor author of the 16th century has long been considered a sort of "enigmatic character", a profile which results from the work of those who, for many centuries, have left his writings to their fate: a story of constant re-readings and equally incessant oversights. This is why it is necessary to re-read Aconcio's production in its entirety and to devote a monographic study to it. Previous scholars' interpretations will obviously be considered, but at the same time an effort will be made to go beyond them through the analysis of both published and manuscript sources, in the attempt to attain a deeper understanding of the figure of this man, who was a Christian, a military and hydraulic engineer and a political philosopher. The title of the thesis was chosen to emphasise how, throughout the three years of the doctorate, my research concentrated in equal measure on all the reflections and activities of Jacopo Aconcio. My object, in fact, was to establish how and to what extent the methodological thinking of the intellectual found application in, and at the same time guided, his theoretical and practical production. I did not mention in the title the author's religious thinking, which has always been considered by everyone the most original and interesting element of his production, because religion, from the Reformation onwards, was primarily a political question and was treated as such by almost all the authors involved in the Protestant movement, Aconcio in the first place. Even the remarks concerning the private, intimate sphere of faith have therefore been analysed in this light: only by acknowledging the centrality of the "problem of politics" in Aconcio's theories, in fact, is it possible to interpret them correctly.
This approach proves the truth of the theoretical premise of my research, that is to say the unity and orderliness of the author's thought: in every field of knowledge, Aconcio applies the rules of the methodus resolutiva as a means to achieve knowledge and elaborate models of pacific cohabitation in society. Aconcio's continuous references to method can make his writing pedantic and rather complex, but at the same time they allow for a consistent and valid analysis of different disciplines. I have not considered it a limitation that most of his reflections appear to our eyes strongly conditioned by the time in which he lived. To see in him, as some have done, the forerunner of Descartes' methodological discourse or, conversely, to judge his religious theories as not very modern, is to force the thought of an author who was first and foremost a Christian man of his own time. Aconcio repeats this himself several times in his writings: he wants to provide individuals with the necessary tools to reach a full-fledged scientific knowledge in the various fields, and also to enable them to seek truth incessantly in the religious domain, which is the duty of every human being. The will to find rules, instruments, and effective solutions characterizes the whole of the author's corpus: Aconcio feels he must look for truth in all the arts, aware as he is that anything can become science as long as it is analysed with method. Nevertheless, he remains a man of his own time, a Christian convinced of the existence of God, creator and governor of the world, to whom people must account for their own actions. To neglect this fact in order to construct a "character", a generic forerunner, but not participant, of some philosophical current or other, is a dangerous and misleading operation.
In this study, I have highlighted how Aconcio's arguments only reveal their full meaning when read in the context in which they were born, without depriving them of their originality but also without charging them with meanings they do not possess. Through a historical-doctrinal approach, I have tried to analyse the complex web of theories and events which constitute the substratum of Aconcio's reflection, in order to trace the correct relations between texts and contexts. The thesis is therefore organised in six chapters, dedicated respectively to Aconcio's biography, to the methodological question, to the author's engineering activity, to his historical knowledge and to his religious thinking, followed by a last section concerning his fortune throughout the centuries. The above-mentioned complexity is determined by the special historical moment in which the author lived. On the one hand, thanks to the new union between science and technique, the 16th century produces discoveries and inventions which make available a previously unthinkable number of notions and lead to a "revolution" in the way of studying and teaching the different subjects, which, by producing a new form of intellectual, involved in politics but also aware of scientific-technological issues, will contribute to the subsequent birth of modern science. On the other hand, the 16th century is ravaged by religious conflicts, which shatter the unity of the Christian world and generate theological-political disputes which will inform the history of European states for many decades. My aim is to show how Aconcio's multifarious activity is the conscious fruit of this historical and religious situation, as well as an attempted answer to the demand for a new kind of engagement on the part of the intellectual.
Immersed in the discussions around methodus, employed in the most important European courts, involved in the abrupt acceleration of technical-scientific activities, and especially concerned by the radical religious reformation brought on by the Protestant movement, Jacopo Aconcio reflects this complex conjunction in his writings without lacking in order and consistency, contrary to what many scholars assume. The object of this work, therefore, is to highlight the unity of the author's thought, in which science, technique, faith and politics are woven into a combination which, although it may appear illogical and confused, is actually tidy and methodical, and therefore in agreement with Aconcio's own intentions and with the specific characters of European culture in the Renaissance. This theory is confirmed by the reading of the Ars muniendorum oppidorum, Aconcio's only work which was until now unavailable. I am persuaded that only a methodical reading of Aconcio's works, without neglecting or glorifying any single one, respects the author's will. From De methodo (1558) onwards, all his writings are summae, guides for the reader who wishes to approach the study of the various disciplines. Undoubtedly, Satan's Stratagems (1565) is something more, not only because of its length, but because it deals with the author's main interest: the celebration of doubt and debate as bases on which to build religious tolerance, which is the best method for pacific cohabitation in society. This, however, does not justify the total centrality which the Stratagems have enjoyed for centuries, at the expense of a proper understanding of the author's will to offer examples of methodological rigour in all sciences. Maybe it is precisely because of the reforming power of Aconcio's thought that, albeit often forgotten throughout the centuries, he has never ceased to reappear and continues to draw attention, both as a man and as an author.
His ideas never cease to stimulate the reader's curiosity, and this may ultimately be the best demonstration of their worth, independently of the historical moment in which they come back to the surface.
Abstract:
This project investigates the privileged relationship that Derek Walcott weaves with the writers who preceded him, and in particular with T.S. Eliot. The analysis of a literary mythographic path reveals the relevance that both landscape and myth assume for the two poets. The research focuses mainly on T.S. Eliot's The Waste Land and Derek Walcott's Mappa del Nuovo Mondo.
Abstract:
Aureliano Fernández-Guerra is known especially among Quevedo scholars because he published the first complete edition of Quevedo's works. Few people know his plays and, for this reason, they have never been studied. These plays were written during his youth, when Fernández-Guerra had not yet decided anything about his career. Nevertheless, they always remained very important to him and, for this reason, he continued to correct and revise them. Among them, the unpublished drama La hija de Cervantes (1840) was considered the most important play. In this doctoral thesis I have tried to describe this Spanish author, focusing especially on his theatre. In the first part I discuss his life and literary works, giving particular importance to his plays, which are La peña de los enamorados (1839), La hija de Cervantes (1840), Alonso Cano (1842) and La Ricahembra (1845), this last one written in collaboration with Manuel Tamayo y Baus, another important and famous playwright. In the second part I deepen the study of La hija de Cervantes because it is a particularly interesting drama: Aureliano Fernández-Guerra chose to represent the author of the Quixote as a character of his drama, dramatizing above all the most mysterious moments of his life, such as the murder of Gaspar de Ezpeleta, his relationship with his daughter Isabel de Saavedra, and his supposed love for a woman whose existence is unknown. Besides, this drama is interesting because it is partially autobiographical: I found several letters and articles in which the similarities between Cervantes's and Aureliano's lives are emphasized: both felt misunderstood and unappreciated by other people, and both had to renounce a great love. In the final part I present the critical edition of La hija de Cervantes based on the last three manuscripts, which are today at the Institut de Teatre in Barcelona. An extensive philological note sets out the transcription criteria.
Abstract:
The A4 collaboration at the Mainz Microtron MAMI investigates the structure of the proton by means of the elastic scattering of polarized electrons off unpolarized hydrogen. With longitudinal polarization, a parity-violating asymmetry in the cross section is measured, which provides information on the strangeness contribution to the vector form factors of the proton. With transverse polarization, azimuthal asymmetries arise that originate from two-photon-exchange contributions to the cross section and give access to the imaginary part of the two-photon amplitude. Within this thesis, measurements at two momentum transfers, each with longitudinal and transverse polarization, were carried out and analyzed. The main tasks were the extraction of the raw asymmetries from the data, the correction of the raw asymmetries for instrumental asymmetries, the estimation of the systematic error, and the determination of the strange form factors from the parity-violating asymmetries. In the measurements with longitudinal polarization, the asymmetries were determined to be A = (-5.59 ± 0.57stat ± 0.29syst) ppm at Q^2 = 0.23 (GeV/c)^2 and A = (-1.39 ± 0.29stat ± 0.12syst) ppm at Q^2 = 0.11 (GeV/c)^2. From these, the linear combinations of the strange form factors are obtained as GEs + 0.225 GMs = 0.029 ± 0.034 and GEs + 0.106 GMs = 0.070 ± 0.035, respectively. Both results are in agreement with other experiments and indicate a non-vanishing strangeness contribution to the form factors. In the measurements with transverse polarization, the azimuthal asymmetries were determined to be A = (-8.51 ± 2.31stat ± 0.89syst) ppm at E = 855 MeV and Q^2 = 0.23 (GeV/c)^2, and A = (-8.59 ± 0.89stat ± 0.83syst) ppm at E = 569 MeV and Q^2 = 0.11 (GeV/c)^2. The magnitude of the measured asymmetries demonstrates that, in two-photon exchange, excited intermediate states of the proton contribute substantially in addition to the ground state.
Abstract:
An extensive sample (2%) of private vehicles in Italy is equipped with a GPS device that periodically measures their position and dynamical state for insurance purposes. Access to this type of data makes it possible to develop theoretical and practical applications of great interest: the real-time reconstruction of the traffic state in a certain region, the development of accurate models of vehicle dynamics, and the study of the cognitive dynamics of drivers. For these applications to be possible, we first need the ability to reconstruct the paths taken by vehicles on the road network from the raw GPS data. In fact, these data are affected by positioning errors and are often very widely spaced (~2 km apart). For these reasons, the task of path identification is not straightforward. This thesis describes the approach we followed to reliably identify vehicle paths from this kind of low-sampling-rate data. The problem of matching data to roads is solved with a Bayesian maximum-likelihood approach, while the identification of the path taken between two consecutive GPS measurements is performed with a specifically developed optimal routing algorithm based on the A* algorithm. The procedure was applied to an off-line urban data sample and proved to be robust and accurate. Future developments will extend the procedure to real-time execution and nation-wide coverage.
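The routing step between two consecutive GPS fixes can be sketched with a standard A* search over a road graph. The toy graph, coordinates, and node names below are invented for illustration; the thesis's actual algorithm runs on the real road network and embeds the Bayesian matching step, which is not shown here.

```python
import heapq
import math

def a_star(graph, coords, start, goal):
    """A* shortest path on a road graph.

    graph:  node -> list of (neighbor, edge_length) pairs
    coords: node -> (x, y) position, used for the straight-line heuristic
    Returns (total_cost, path as list of nodes)."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x2 - x1, y2 - y1)   # admissible straight-line heuristic

    open_set = [(h(start), 0.0, start, [start])]   # (f = g + h, g, node, path)
    best_g = {}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:   # lazy deletion of stale entries
            continue
        best_g[node] = g
        for nbr, w in graph.get(node, []):
            heapq.heappush(open_set, (g + w + h(nbr), g + w, nbr, path + [nbr]))
    return math.inf, []

# Toy road graph connecting two hypothetical GPS fixes at "A" and "D".
graph = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("C", 1.0), ("D", 5.0)],
    "C": [("D", 1.0)],
    "D": [],
}
coords = {"A": (0, 0), "B": (1, 0), "C": (2, 0), "D": (3, 0)}
cost, path = a_star(graph, coords, "A", "D")
```

With an admissible heuristic such as the straight-line distance, A* returns the same shortest path as Dijkstra's algorithm while expanding fewer nodes, which matters when the two fixes are kilometres apart on a dense urban network.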
Abstract:
Diffusion-weighted magnetic resonance imaging (MRI) with the hyperpolarized noble-gas isotope 3He is a new technique for investigating diseases of the airways and the lung. The diffusive motion of the 3He atoms in the airways of the lung is restricted by the airway walls, and this restriction depends both on the dimensions of the airways and on the measurement parameters. One therefore measures an apparent diffusion coefficient (ADC) that is smaller than the coefficient of free diffusion. The ADC thus allows a qualitative estimate of the size of the airways and of their pathological changes, without directly imaging the airways themselves. A three-dimensional mapping of the spatial distribution of lung damage thereby becomes possible. The aim of this work was to provide a deeper, physically grounded understanding of the 3He diffusion measurement and to extend the method of diffusion-weighted 3He MRI toward the acquisition of the complete 3He diffusion tensor. To this end, phantom studies, animal experiments and patient measurements were used to systematically investigate the extent to which different factors influence the result of the ADC measurement. It could be shown, for example, that residual air flow at the end of inspiration has no influence on the ADC value. Simulation calculations showed to what extent the loss of 3He polarization caused by the excitation pulse affects the measured ADC value. In a study of healthy volunteers and patients, the repeatability of the ADC measurement was investigated, as well as the influence of gravitational effects. These results allow more precise statements about systematic and statistical measurement errors, as well as about threshold values separating normal from pathologically altered lung tissue.
Within this work, the existing diffusion-weighted imaging method was extended to acquire the complete diffusion tensor of 3He in the lung. This was important because diffusion along the airways is largely free, whereas diffusion perpendicular to the airways is restricted. Simulation calculations were used to investigate the critical influence of noise in the MR images on the quality of the results. The new method was first validated on a phantom consisting of a bundle of glass capillaries whose inner diameter matches that of the human acinus; theoretical calculations and experimental results were in good agreement. In first measurements on humans, different anisotropy values could thus be found between healthy volunteers and patients, with a tendency toward isotropic diffusion in patients with pulmonary emphysema. In summary, the results of this work contribute to a better understanding of the ADC measurement method and, through a deeper understanding of the factors influencing the 3He measurement, help to plan future studies better.
Abstract:
My thesis engages with the contemporary theoretical-literary debate on the possibility of a cognitive approach to narrative, and to literature in particular. It sets out to explore the relationship between narrative and experience, redefining the concept of the "experientiality" of narrative introduced by Monika Fludernik in her Towards a "Natural" Narratology (1996). Unlike Fludernik, who identified experientiality with the representation of characters' experience, my discussion assigns a leading role to the reader, seeking to answer the question: why is reading a story an experience, or why does it constitute itself as one? The underlying intuition is that theorizations of experience and consciousness in the philosophy of mind of the last twenty years can shed light on the interaction between readers and narrative texts. My main point of reference is "second-generation" cognitive science, according to which experience is an active, embodied engagement with the world. The first part of my study is devoted to the interweaving of narrative with what I call each reader's "experiential background", a repertoire of experiences already familiar to readers through repeated interactions with the physical and socio-cultural world. I also dwell on how engaging with a narrative text can cause changes and shifts in this experiential background, affecting the reader's worldview. I then turn to the reader's bodily involvement, showing that narrative can draw on its recipients' experiential background at the level of basic experience as well: embodied simulations of perception contribute to our understanding of stories, shaping both the reconstruction of the space of the setting and the intersubjective relationship between readers and characters.
Finally, I address the relationship between the experience of reading and the critical-literary practice of interpretation, arguing that, far from constituting two opposite modes of engaging with texts, they are intimately connected.
Abstract:
The present study examines an unpublished sixteenth-century Venetian chronicle, running from the origins to 1538/39, which part of the manuscript tradition attributes to the Patriarch of Venice Giovanni Tiepolo (1619-1631) and part to Agostino degli Agostini (1530-1574), a Venetian patrician whose name is essentially linked to a chronicle covering the years 421 to 1570. Regardless of who its original author was, the chronicle, of considerable value for the internal history and the workings of Venetian institutions, displays markedly original compositional and formal features that place it in a historiographical perspective alternative to the dualism between the official historiography promoted by public decree and the private initiative of the diaria of the fifteenth and sixteenth centuries. The Venetian chronicle, abandoned in favour of more sophisticated and innovative forms of disseminating public information, survives, formally unchanged in its archaism, renewing itself through the tendency to create compendia rich in documents and lists, intended to help the nobility orient itself in the contemporary socio-political world. Thus the divorce is consummated between the technical-political information useful to the patriciate in carrying out its work and public historiography, which, faced with various needs and different literary genres, ideologically chooses to embrace the genre of the laus civitatis and of laudatory and encomiastic historiography. It is in this context that the chronicle copied by the Patriarch Giovanni Tiepolo belongs: a clear example of an attempt to rationalize information, in which news and elements not considered immediately useful, such as the long lists of the 41 electors, the ducal promissioni, and individual episodes and events, find a place outside the chronicle, in what Reines defines as the then-nascent political archive of the sixteenth century.
Abstract:
The thesis comprises four papers; its main goal is to provide evidence of the prominent role that behavioral analysis can play in the personnel economics domain. The research tool prevalently used in the thesis is experimental analysis. The first paper provides laboratory evidence on how the standard screening model, based on the assumption that the pecuniary dimension is the main variable in workers' choices, fails when intrinsic motivation is introduced into the analysis. The second paper explores workers' behavioral reactions when dealing with supervisors who may make errors in assessing their job performance. In particular, deserving agents who have exerted high effort may not be rewarded (Type-I errors), and undeserving agents who have exerted low effort may be rewarded (Type-II errors). Although a standard neoclassical model predicts both errors to be equally detrimental to effort provision, this prediction fails when tested through a laboratory experiment. Findings from this study suggest that failing to reward deserving agents is significantly more detrimental than rewarding undeserving agents. The third paper investigates the performance of two antithetical non-monetary incentive schemes on schooling achievement. The study is conducted through a field experiment: students randomized to the main treatments were incentivized to cooperate or to compete in order to earn additional exam points. Consistently with the theoretical model proposed in the paper, the level of effort in the competitive scheme proved to be higher than in the cooperative setting. Interestingly, however, this result is characterized by a strong gender effect. The fourth paper exploits a natural experiment generated by the credit crunch that occurred in the UK in 2007. The economic turmoil negatively influenced the private sector, while public sector employees were not directly hit by the crisis. This shock, through the rise of the unemployment rate and increasing labor market uncertainty, generated an exogenous variation in the opportunity cost of maternity leave in the private sector labor force. This paper identifies the different responses.
Abstract:
The research focuses on certain specific archetypal dynamics detectable within the collective unconscious of the late nineteenth century, and on the profound influence these exerted both on the Spanish-American culture and society of the time and on the modernist literary current in particular. The archetype whose literary re-emergence is analysed is that of the Great Mother, as theorized by C. G. Jung and refined in the later studies of Erich Neumann. Drawing in particular on the latter's reflections, and extending to later psychoanalytic contributions and symbolic studies (notably those of James Hillman, Gaston Bachelard and Gilbert Durand), the study highlights the archetypal dominance of the Great Mother within Spanish-American Modernism, understood both in a transpersonal sense (i.e. as a representation of the unconscious) and in a sense more specifically representative of the Feminine. Finally, archetypal criticism is applied to the works of Delmira Agustini, Alfonsina Storni and Juana de Ibarbourou, directing the analysis in particular to the literary representation of those aspects of this archetype identified as "negative" and therefore most harshly subjected to repression over the centuries.
Abstract:
This dissertation was produced within a multicentre EU-funded project dealing with the possible applications of single nucleotide polymorphisms (SNPs) for the individualization of persons, in the context of assigning biological crime-scene traces or identifying unknown dead bodies. The overall aim of the project was to establish and validate high-resolution genotyping methods that can analyse SNPs simultaneously in multiplex format with high accuracy and little effort. First, 29 Y-chromosomal and 52 autosomal SNPs were selected under the requirement that, as a multiplex, they offer the highest possible chance of individualization. This was followed by the validation of both multiplex systems and of the SNaPshot™ minisequencing method in systematic studies involving all working groups of the project. The validated minisequencing-based reference method served on the one hand for the controlled collaboration of different laboratories, and on the other as the basis for the development, in this work, of an assay for SNP genotyping using electronic microarray technology. The independent main part of this dissertation describes, using the previously validated autosomal SNPs, the development and validation of a new hybridization assay for the electronic microarray platform of the company Nanogen. To this end, three different assays, differing in their functional principle on the microarray, were first established; on the basis of performance, the capture-down assay was selected for further development. After numerous optimization steps concerning PCR product treatment, instrument-specific procedures and assay-specific oligonucleotide designs, the capture-down assay was ready for the simultaneous typing of three individuals with 32 SNPs each on one microarray.
This procedure was then validated on 40 DNA samples with known genotypes for the 32 SNPs, and its accuracy was determined by parallel SNaPshot™ typing. The result not only proves the suitability of the validated assay and of electronic microarray technology for certain questions, but also shows their advantages in terms of speed, flexibility and efficiency. The automation, which arranges the fragments to be analysed spatially immediately before the analysis, reduces unnecessary working steps and thus the frequency of errors and the risk of contamination, while improving time efficiency. With a maximum achieved accuracy of 94%, however, the reliability of the STR systems currently used in forensic genetics cannot yet be matched. The role of the new method will therefore lie not in replacing the established methods, but in complementing them for special problems such as the analysis of strongly degraded DNA traces.
Abstract:
Calorimetric low temperature detectors (CLTDs) were used for the first time in measurements of the specific energy loss (dE/dx) of low-energy heavy ions passing through matter. The measurements were carried out in the energy range below the Bragg peak, with 0.1 - 1.4 MeV/u 238U ions in carbon and gold, and with 0.05 - 1.0 MeV/u 131Xe ions in carbon, nickel and gold. Combining the CLTDs with a time-of-flight detector made it possible to determine continuous dE/dx curves simultaneously over larger energy ranges. Compared with conventional measurement systems that use ionization detectors for the energy measurement, the higher energy resolution and linearity of the CLTDs allowed a reduction of the calibration errors and an extension of the accessible energy range of the dE/dx measurements toward lower energies. The data obtained can contribute to the refinement of theoretical and semi-empirical models and thus to higher precision in the prediction of the specific energy losses of heavy ions. Besides the experimental determination of new data, the alternative detection principle of the CLTDs, the advantages of these detectors with respect to energy resolution and linearity, and the modular design of the CLTD array composed of several individual detectors were exploited to examine this kind of measurement for potential systematic uncertainties. Among other things, unexpected channeling effects were observed when the ions passed through thin polycrystalline absorber foils. The coincident energy and time-of-flight (E-ToF) measurements were further used to determine the resolving power of the detector system for the direct in-flight mass determination of slow, very heavy ions. Thanks to the excellent energy resolution of the CLTDs, mass resolutions of Delta-m (FWHM) = 1.3 - 2.5 u were achieved for 0.1 - 0.6 MeV/u 238U ions.
In an E-ToF measurement with ionization detectors, such values are not attainable in this energy and mass range, because the energy resolution is limited by statistical fluctuations of the loss processes during particle detection.
Abstract:
Purpose The accuracy, efficiency, and efficacy of four commonly recommended medication safety assessment methodologies were systematically reviewed. Methods Medical literature databases were systematically searched for any comparative study conducted between January 2000 and October 2009 in which at least two of the four methodologies—incident report review, direct observation, chart review, and trigger tool—were compared with one another. Any study that compared two or more methodologies for quantitative accuracy (adequacy of the assessment of medication errors and adverse drug events), efficiency (effort and cost), and efficacy, and that provided numerical data, was included in the analysis. Results Twenty-eight studies were included in this review. Of these, 22 compared two of the methodologies, and 6 compared three. Direct observation identified the greatest number of reports of drug-related problems (DRPs), while incident report review identified the fewest. However, incident report review generally showed a higher specificity than the other methods and most effectively captured severe DRPs. In contrast, the sensitivity of incident report review was lower than that of trigger tool. While trigger tool was the least labor-intensive of the four methodologies, incident report review appeared to be the least expensive, but only when linked with concomitant automated reporting systems and targeted follow-up. Conclusion All four medication safety assessment techniques—incident report review, chart review, direct observation, and trigger tool—have different strengths and weaknesses. Overlap between the different methods in identifying DRPs is minimal. While trigger tool appeared to be the most effective and labor-efficient method, incident report review best identified high-severity DRPs.