Abstract:
Deep-focus earthquakes, which occur in the upper mantle at depths of about 400 km, are commonly linked to the pressure-dependent polymorphic phase transition from olivine (α-phase) to spinel (β-phase) that takes place at the same depth. It remains unclear, however, how the phase transition relates to the mechanical failure of mantle material. Two models are currently under discussion, attributing the failure either to microstructures produced by the phase transition or to the rheological changes the transition induces in the mantle rock. Because the natural material is inaccessible, studies of the olivine→spinel transformation are entirely restricted to theoretical considerations, high-pressure experiments, and numerical simulations. The central goal of this dissertation was to develop a working computer model for simulating the microstructures produced by the phase transition. The model was then applied to investigate the microstructural evolution of spinel grains and its controlling parameters. The thesis is therefore divided into two parts: the first part (chapters 2 and 3) covers the physical laws and the basic operation of the computer model, which combines equations for the kinetic reaction rate with laws of non-equilibrium thermodynamics under non-hydrostatic conditions. The model extends a spring network of the software latte from the program package elle. Its most important parameter is the normal stress on the spinel grain surface. In addition, the program takes the latent heat of the reaction, the surface energy, and the low viscosity of mantle material into account as further essential parameters in the calculation of the reaction kinetics.
The growth behaviour and fractal dimension of the computed spinel grains agree well with spinel structures from high-pressure experiments. In the second part of the thesis, the computer model is applied to explore the development of the surface structure of spinel grains under different conditions. The so-called 'anticrack theory of faulting', which explains the catastrophic course of the olivine→spinel transformation in olivine-bearing material under differential stress through stress concentrations, was examined with the computer model. The proposed mechanism could not be confirmed. Instead, surface structures resembling anticracks can be explained by impurities in the material (chapter 4). A series of simulations was devoted to deriving the most important parameters controlling the reaction in monomineralic olivine (chapters 5 and 6). The principal influences on the grain shape of spinel turned out to be the principal normal stresses on the system, heterogeneities in the host mineral, and the viscosity. Subsequently, the nucleation and growth of spinel in polymineralic assemblages were investigated (chapter 7). The reaction rate of the olivine→spinel transformation and the development of spinel networks and clusters are considerably accelerated by the presence of non-reacting minerals such as garnet or pyroxene. The formation of spinel networks has the potential to significantly alter the mechanical properties of mantle rock, whether through the formation of potential shear zones or through framework building. This localization of spinel growth in mantle rocks may therefore offer a new explanation for deep earthquakes.
Abstract:
Membrane proteins play a major role in every living cell. They are the key factors in the cell's metabolism and in other functions, for example in cell-cell interaction, signal transduction, and transport of ions and nutrients. Cytochrome c oxidase (CcO), as one of the membrane proteins of the respiratory chain, plays a significant role in the energy transformation of higher organisms. CcO is a multi-centered heme protein, utilizing redox energy to actively transport protons across the mitochondrial membrane. One aim of this dissertation is to investigate single steps in the mechanism of the ion transfer process coupled to electron transfer, which are not fully understood. The protein-tethered bilayer lipid membrane is a general approach to immobilize membrane proteins in an oriented fashion on a planar electrode embedded in a biomimetic membrane. This system enables the combination of electrochemical techniques with surface enhanced resonance Raman spectroscopy (SERRS), surface enhanced infrared reflection absorption spectroscopy (SEIRAS), and surface plasmon spectroscopy to study protein-mediated electron and ion transport processes. The orientation of the enzymes within the surface-confined architecture can be controlled by specific site mutations, i.e. the insertion of a poly-histidine tag at different subunits of the enzyme. CcO can thus be oriented uniformly with its natural electron pathway entry pointing either towards or away from the electrode surface. The first orientation allows an ultra-fast direct electron transfer (ET) into the protein, not provided by conventional systems, which can be leveraged to study intrinsic charge transfer processes. The second orientation permits study of the interaction with its natural electron donor, cytochrome c. Electrochemical and SERR measurements show conclusively that the redox site structure and the activity of the surface-confined enzyme are preserved.
Therefore, this biomimetic system offers a unique platform for studying the kinetics of the ET processes in order to clarify mechanistic properties of the enzyme. Highly sensitive and ultra-fast electrochemical techniques allow the separation of ET steps between all four redox centres, including the determination of ET rates. Furthermore, proton transfer coupled to ET could be directly measured and discriminated from other ion transfer processes, revealing novel information about the proton transfer mechanism of cytochrome c oxidase. To study the kinetics of the ET inside the protein, including the catalytic center, time-resolved SEIRAS and SERRS measurements were performed to gain more insight into the structural and coordination changes of the heme environment. The electrical behaviour of tethered membrane systems and membrane-intrinsic proteins, as well as the related charge transfer processes, was simulated by solving the respective sets of differential equations with the software package SPICE. This helps to understand charge transfer processes across membranes and to develop models that can help elucidate the mechanisms of complex enzymatic processes.
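The SPICE-style simulation described above amounts to integrating the differential equations of an equivalent circuit. A minimal sketch, assuming a hypothetical series-resistance/membrane RC circuit with invented parameter values (not taken from the thesis), integrated with explicit Euler:

```python
# Minimal sketch (hypothetical parameters): a tethered bilayer membrane is often
# modelled as a parallel RC element (R_m, C_m) fed through a series resistance R_s;
# under a voltage step the membrane potential obeys
#   dV_m/dt = (V_step - V_m) / (R_s * C_m) - V_m / (R_m * C_m)

def simulate_membrane_charging(v_step, r_s, r_m, c_m, t_end, dt=1e-6):
    """Explicit-Euler integration of the membrane voltage V_m(t)."""
    v_m = 0.0
    t = 0.0
    trace = []
    while t < t_end:
        dv = (v_step - v_m) / (r_s * c_m) - v_m / (r_m * c_m)
        v_m += dv * dt
        t += dt
        trace.append(v_m)
    return trace

trace = simulate_membrane_charging(v_step=0.1, r_s=1e3, r_m=1e6, c_m=1e-6, t_end=5e-3)
# V_m approaches the resistive-divider value v_step * r_m / (r_m + r_s)
```

A dedicated circuit simulator such as SPICE solves the same equations implicitly and handles far larger networks; this sketch only illustrates the underlying mathematics.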
Abstract:
Although functional magnetic resonance imaging (fMRI) of interictal spikes with simultaneous EEG recording has been studied for several years as a means of localizing the brain structures involved in focal seizure disorders, it remains an experimental method. To obtain reliable results, improving the signal-to-noise ratio in the statistical analysis of the image data is particularly important. Earlier work on so-called event-related fMRI points to a relationship between the frequency of single stimuli and the subsequent hemodynamic response in fMRI. To demonstrate a possible influence of the frequency of interictal spikes on the signal response, 20 children with focal epilepsy were examined with EEG-fMRI. Data from 11 of these patients could be evaluated. In a two-fold analysis with the software package SPM99, the image data were first assigned to the "stimulus" or "rest" condition solely according to the occurrence of interictal spikes, irrespective of the number of spikes per measurement time point (on/off analysis). In a second step, the "stimulus" conditions were additionally analysed according to the number of individual spikes (frequency-correlated analysis). In 5 of the 11 patients these analyses showed an increase in the sensitivity and significance of the activations detected by fMRI. A higher specificity, however, could not be demonstrated. These results indicate a positive correlation between stimulus frequency and the subsequent hemodynamic response for interictal spikes as well, which can be exploited for EEG-fMRI. In 6 patients no fMRI activation could be detected; possible technical and physiological causes for this are discussed.
Abstract:
The IceCube neutrino telescope at the South Pole detects high-energy neutrinos through the weak interaction of charged and neutral currents. The analysis rests on a comparison with Monte Carlo simulations whose production is coordinated globally. In Mainz, such simulations were realized for the first time within the architecture of the Worldwide LHC Computing Grid (WLCG), opening up the possibility of distributing Monte Carlo production to other German computing farms (CEs) with IceCube authorization. Atmospheric muons are recorded at a rate of more than 1000 events per second. A correct interpretation of this dominant signal, which must be reduced by a factor of 10^6 to extract the actual neutrino signal, is therefore of great importance. Dedicated simulations with the software environment CORSIKA were carried out to determine the production height of atmospheric muons as a function of energy and zenith angle. IceCube muon rates were compared with weather data from the European Centre for Medium-Range Weather Forecasts (ECMWF), and correlations between seasonal as well as short-term fluctuations of the atmospheric temperature and the muon rates could be demonstrated. In addition, a Fourier analysis of the IceCube data was used to search for periodic effects in the atmosphere caused, for example, by meteorological gravity waves. So far, no significant evidence for the existence of gravity waves at the South Pole has been found.
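The correlation between muon rate and atmospheric temperature described above is conventionally quantified by a temperature coefficient relating relative deviations, dR/⟨R⟩ = α_T · dT_eff/⟨T_eff⟩. A minimal sketch with synthetic data (the variable names and numbers are invented for illustration):

```python
# Sketch (synthetic data): the seasonal muon-rate modulation is commonly written as
#   dR/<R> = alpha_T * dT_eff/<T_eff>,
# so alpha_T follows from a least-squares slope (through the origin) of the
# relative rate deviation versus the relative temperature deviation.

def fit_alpha_t(rates, temps):
    """Estimate the temperature coefficient alpha_T via simple linear regression."""
    r_mean = sum(rates) / len(rates)
    t_mean = sum(temps) / len(temps)
    x = [(t - t_mean) / t_mean for t in temps]
    y = [(r - r_mean) / r_mean for r in rates]
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

# Synthetic example: rates generated with a known alpha_T = 0.9
temps = [220.0, 225.0, 230.0, 235.0, 240.0]   # effective temperatures in K
t_mean = sum(temps) / len(temps)
rates = [1000.0 * (1 + 0.9 * (t - t_mean) / t_mean) for t in temps]
alpha = fit_alpha_t(rates, temps)   # recovers 0.9 for this noise-free example
```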
Abstract:
Redshift Space Distortions (RSD) are an apparent anisotropy in the distribution of galaxies due to their peculiar motion. These features are imprinted in the correlation function of galaxies, which describes how these structures are distributed around each other. RSD can be represented by a distortion parameter $\beta$, which is strictly related to the growth of cosmic structures. For this reason, measurements of RSD can be exploited to constrain cosmological parameters such as, for example, the neutrino mass. Neutrinos are neutral subatomic particles that come in three flavours: the electron, the muon and the tau neutrino. Their mass differences can be measured in oscillation experiments. Information on the absolute scale of the neutrino mass can come from cosmology, since neutrinos leave a characteristic imprint on the large scale structure of the universe. The aim of this thesis is to provide constraints on the accuracy with which the neutrino mass can be estimated when exploiting measurements of RSD. In particular, we want to describe how the error on the neutrino mass estimate depends on three fundamental parameters of a galaxy redshift survey: the density of the catalogue, the bias of the sample considered and the volume observed. To do this we make use of the BASICC Simulation, from which we extract a series of dark matter halo catalogues characterized by different values of bias, density and volume. These mock data are analysed via a Markov Chain Monte Carlo procedure in order to estimate the neutrino mass fraction, using the software package CosmoMC, which has been suitably modified. In this way we are able to extract a fitting formula describing our measurements, which can be used to forecast the precision reachable with this kind of observations in future surveys like Euclid.
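The distortion parameter above ties RSD to the growth of structure through $\beta = f/b$. A minimal sketch of this standard relation, assuming the common growth-index approximation $f(z) \approx \Omega_m(z)^{0.55}$ for flat ΛCDM (illustrative values only, not the thesis's actual pipeline):

```python
# Sketch: beta = f / b, with the linear growth rate approximated as
# f(z) ~ Omega_m(z)**0.55 in flat LCDM (gamma = 0.55 is an assumed approximation).

def omega_m_of_z(omega_m0, z):
    """Matter density parameter at redshift z for a flat LCDM universe."""
    e2 = omega_m0 * (1 + z) ** 3 + (1 - omega_m0)   # H(z)^2 / H0^2
    return omega_m0 * (1 + z) ** 3 / e2

def beta(omega_m0, z, bias):
    """Linear redshift-space distortion parameter beta = f / b."""
    f = omega_m_of_z(omega_m0, z) ** 0.55
    return f / bias

b0 = beta(omega_m0=0.27, z=0.0, bias=1.5)   # roughly 0.32 for these inputs
```

At higher redshift the universe is more matter-dominated, so for fixed bias the distortion parameter grows with z.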
Abstract:
Modern ESI-LC-MS/MS techniques, combined with bottom-up approaches, allow the qualitative and quantitative characterization of several thousand proteins in a single experiment. Data-independent acquisition methods such as MSE and the ion-mobility (IMS) variants HDMSE and UDMSE are particularly suited to label-free protein quantification. Because of their high complexity, the data acquired in this way place special demands on the analysis software, and quantitative analysis of MSE/HDMSE/UDMSE data has so far been limited to a few commercial solutions.

In this work, a strategy and a series of new methods for cross-run quantitative analysis of label-free MSE/HDMSE/UDMSE data were developed and implemented as the software ISOQuant. The first steps of the data analysis (feature detection, peptide and protein identification) use the commercial software PLGS. The independent PLGS results of all runs of an experiment are then merged into a relational database and reprocessed with dedicated algorithms (retention-time alignment, feature clustering, multidimensional intensity normalization, multi-stage data filtering, protein inference, redistribution of the intensities of shared peptides, protein quantification). This post-processing significantly increases the reproducibility of the qualitative and quantitative results.

To evaluate the performance of the quantitative data analysis and compare it with other solutions, a set of exactly defined hybrid-proteome samples was developed. The samples were acquired with the MSE and UDMSE methods, analysed with Progenesis QIP, synapter, and ISOQuant, and compared. In contrast to synapter and Progenesis QIP, ISOQuant achieved both a high reproducibility of protein identification and a high precision and accuracy of protein quantification.

In conclusion, the presented algorithms and the analysis workflow enable reliable and reproducible quantitative data analyses. With ISOQuant, a simple and efficient tool for routine high-throughput analysis of label-free MSE/HDMSE/UDMSE data has been developed. Together, the hybrid-proteome samples and the evaluation metrics constitute a comprehensive system for evaluating quantitative acquisition and data-analysis pipelines.
Abstract:
Satellite image classification involves designing and developing efficient image classifiers. With satellite image data and image analysis methods multiplying rapidly, selecting the right mix of data sources and data analysis approaches has become critical to the generation of quality land-use maps. In this study, a new postprocessing information fusion algorithm for the extraction and representation of land-use information based on high-resolution satellite imagery is presented. The approach can produce land-use maps with sharp interregional boundaries and homogeneous regions, and proceeds in five steps. First, a GIS layer (ATKIS data) was used to generate two coarse homogeneous regions, i.e. urban and rural areas. Second, a thematic (class) map was generated using a hybrid spectral classifier combining the Gaussian Maximum Likelihood (GML) algorithm and the ISODATA classifier. Third, a probabilistic relaxation algorithm was applied to the thematic map, yielding a smoothed thematic map. Fourth, edge detection and edge thinning techniques were used to generate a contour map with pixel-width interclass boundaries. Fifth, the contour map was superimposed on the thematic map by a region-growing algorithm, with the contour map and the smoothed thematic map as two constraints. For the operation of the proposed method, a software package was developed in the C programming language. It comprises the GML algorithm, a probabilistic relaxation algorithm, the TBL edge detector, an edge thresholding algorithm, a fast parallel thinning algorithm, and a region-growing information fusion algorithm. The county of Landau in the state of Rheinland-Pfalz, Germany, was selected as the test site, with high-resolution IRS-1C imagery as the principal input data.
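The second step's spectral classification can be illustrated with a toy Gaussian Maximum Likelihood rule. This sketch assumes diagonal covariances and invented two-band training samples for brevity; a full GML classifier as used in the study would estimate complete covariance matrices from real spectral bands:

```python
# Sketch: GML assigns each pixel to the class whose Gaussian (per-band mean and
# variance; diagonal covariance assumed here for brevity) maximizes the
# log-likelihood of the observed band values.
import math

def train_gml(samples_by_class):
    """Estimate per-class, per-band mean and variance from training pixels."""
    params = {}
    for cls, pixels in samples_by_class.items():
        n_bands = len(pixels[0])
        means = [sum(p[b] for p in pixels) / len(pixels) for b in range(n_bands)]
        varis = [max(sum((p[b] - means[b]) ** 2 for p in pixels) / len(pixels), 1e-6)
                 for b in range(n_bands)]
        params[cls] = (means, varis)
    return params

def classify_gml(pixel, params):
    """Return the class maximizing the diagonal-covariance Gaussian log-likelihood."""
    def loglik(means, varis):
        return sum(-0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)
                   for x, m, v in zip(pixel, means, varis))
    return max(params, key=lambda c: loglik(*params[c]))

# Invented two-band training samples for two classes
training = {"urban": [(80, 60), (85, 62), (78, 58)],
            "vegetation": [(30, 90), (28, 95), (33, 92)]}
params = train_gml(training)
label = classify_gml((82, 61), params)   # lies near the urban cluster
```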
Use of 3D Slicer in clinical research: a critical review of reference experiences
Abstract:
Over the last 20 years, technological progress has profoundly changed many fields, among them healthcare, where natively digital diagnostic equipment has emerged, such as Computed Tomography (CT), Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI), and ultrasound. Unlike traditional techniques such as conventional radiology, which return a two-dimensional image obtained by simple projection of the anatomical structure under examination, these new systems generate tomographic scans. The availability of digital images containing three-dimensional data is an enormous step forward for diagnostic investigation, but extracting and exploiting their valuable informational content requires the right tools, which, given the nature of the acquisitions, are to be found in computer science. To this end, this work presents a software package for the visualization, analysis, and processing of medical images called 3D Slicer, a powerful tool that can be used in a variety of medical contexts. The first chapter offers an introduction to the program; the second chapter provides a more technical treatment, examining some of the software's basic functions as well as more specialized ones; finally, the third chapter considers an endovascular stent-graft procedure and shows how, thanks to the support of innovative surgical navigation systems, 3D Slicer can also be used intraoperatively.
Abstract:
This thesis aims to assess the similarities and mismatches between the outputs of two independent methods for cloud cover quantification and classification that rest on quite different physical bases. One is the SAFNWC software package, designed to process radiance data acquired by the SEVIRI sensor in the VIS/IR. The other is the MWCC algorithm, which uses the brightness temperatures acquired by the AMSU-B and MHS sensors in their channels centered on the MW water vapour absorption band. At a first stage their cloud detection capability was tested by comparing the cloud masks they produce. These showed good agreement between the two methods, although some critical situations stand out: the MWCC fails to reveal clouds which, according to SAFNWC, are fractional, cirrus, very low or high opaque clouds. In the second stage of the inter-comparison, the pixels classified as cloudy by both packages were then compared. The overall tendency of the MWCC method is an overestimation of the lower cloud classes; vice versa, the higher the cloud top, the larger the cloud portion that the MWCC misses but that the SAFNWC tool detects. The same picture emerges from a series of tests carried out using the cloud top height information to evaluate the height ranges in which each MWCC category is defined. Therefore, although the two methods are intended to provide the same kind of information, in reality they return quite different details on the same atmospheric column. The SAFNWC retrieval, being very sensitive to the cloud top temperature, captures the actual level reached by the cloud; the MWCC, by exploiting the penetration capability of microwaves, gives information about levels located more deeply within the atmospheric column.
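The first-stage comparison of the two cloud masks reduces to a pixel-wise contingency table. A minimal sketch with invented one-dimensional mask values (the real masks are two-dimensional satellite fields):

```python
# Sketch: agreement between two binary cloud masks (1 = cloudy, 0 = clear)
# summarized as a 2x2 contingency table plus the simple agreement fraction.

def cloud_mask_agreement(mask_a, mask_b):
    """Count the four contingency cells and return them with the agreement fraction."""
    counts = {"both_cloudy": 0, "a_only": 0, "b_only": 0, "both_clear": 0}
    for a, b in zip(mask_a, mask_b):
        if a and b:
            counts["both_cloudy"] += 1
        elif a:
            counts["a_only"] += 1
        elif b:
            counts["b_only"] += 1
        else:
            counts["both_clear"] += 1
    agree = (counts["both_cloudy"] + counts["both_clear"]) / len(mask_a)
    return counts, agree

# Invented example masks standing in for SAFNWC and MWCC output pixels
safnwc = [1, 1, 0, 0, 1, 0, 1, 1]
mwcc = [1, 0, 0, 0, 1, 0, 1, 1]
counts, agreement = cloud_mask_agreement(safnwc, mwcc)   # agreement = 7/8
```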
Abstract:
PURPOSE: To determine the reproducibility and validity of video screen measurement (VSM) of sagittal plane joint angles during gait. METHODS: 17 children with spastic cerebral palsy walked on a 10 m walkway. Videos were recorded and 3d-instrumented gait analysis (3d-IGA) was performed. Two investigators measured six sagittal joint/segment angles (shank, ankle, knee, hip, pelvis, and trunk) using a custom-made software package. The intra- and interrater reproducibility were expressed by the intraclass correlation coefficient (ICC), the standard error of measurement (SEM) and the smallest detectable difference (SDD). The agreement between VSM and 3d joint angles was illustrated by Bland-Altman plots and limits of agreement (LoA). RESULTS: Regarding the intrarater reproducibility of VSM, the ICC ranged from 0.99 (shank) to 0.58 (trunk), the SEM from 0.81 degrees (shank) to 5.97 degrees (trunk) and the SDD from 1.80 degrees (shank) to 16.55 degrees (trunk). Regarding the interrater reproducibility, the ICC ranged from 0.99 (shank) to 0.48 (trunk), the SEM from 0.70 degrees (shank) to 6.78 degrees (trunk) and the SDD from 1.95 degrees (shank) to 18.8 degrees (trunk). The LoA between VSM and 3d data ranged from 0.4+/-13.4 degrees (knee extension stance) to 12.0+/-14.6 degrees (ankle dorsiflexion swing). CONCLUSION: When performed by the same observer, VSM mostly allows the detection of relevant changes after an intervention. However, VSM angles differ from 3d-IGA and do not reflect the real sagittal joint position, probably due to the additional movements in the other planes.
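The SEM and SDD reported above are commonly derived from the ICC as SEM = SD·√(1 − ICC) and SDD = 1.96·√2·SEM. The abstract does not state the study's exact formulas, so both the definitions and the input numbers below are illustrative assumptions:

```python
# Sketch: commonly used reproducibility statistics (assumed definitions, not
# necessarily the study's exact formulas).
import math

def sem_and_sdd(sd, icc):
    """Standard error of measurement and smallest detectable difference."""
    sem = sd * math.sqrt(1 - icc)          # SEM = SD * sqrt(1 - ICC)
    sdd = 1.96 * math.sqrt(2) * sem        # SDD = 1.96 * sqrt(2) * SEM
    return sem, sdd

# Hypothetical inputs: between-subject SD of 8.1 degrees, ICC of 0.99
sem, sdd = sem_and_sdd(sd=8.1, icc=0.99)   # SEM = 0.81 degrees
```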
Abstract:
Several practical obstacles in data handling and evaluation complicate the use of quantitative localized magnetic resonance spectroscopy (qMRS) in clinical routine MR examinations. To overcome these obstacles, a clinically feasible MR pulse sequence protocol based on standard available MR pulse sequences for qMRS has been implemented, along with newly added functionalities in the free software package jMRUI-v5.0, to make qMRS attractive for clinical routine. This enables (a) easy and fast DICOM data transfer between the MR console and the qMRS computer, (b) visualization of combined MR spectroscopy and imaging, (c) creation and network transfer of spectroscopy reports in DICOM format, (d) integration of advanced water reference models for absolute quantification, and (e) setup of databases containing normal metabolite concentrations of healthy subjects. To demonstrate the workflow of qMRS using these implementations, databases of normal metabolite concentrations in different regions of brain tissue were created using spectroscopic data acquired in 55 normal subjects (age range 6-61 years) on 1.5T and 3T MR systems, and illustrated in one clinical case of a typical brain tumor (primitive neuroectodermal tumor). The MR pulse sequence protocol and the newly implemented software functionalities facilitate the incorporation of qMRS, and reference to normal metabolite concentration data, into daily clinical routine. Magn Reson Med, 2013. © 2012 Wiley Periodicals, Inc.
Abstract:
11beta-Hydroxysteroid dehydrogenase (11beta-HSD) enzymes catalyze the conversion of biologically inactive 11-ketosteroids into their active 11beta-hydroxy derivatives and vice versa. Inhibition of 11beta-HSD1 has considerable therapeutic potential for glucocorticoid-associated diseases including obesity, diabetes, wound healing, and muscle atrophy. Because inhibition of related enzymes such as 11beta-HSD2 and 17beta-HSDs causes sodium retention and hypertension or interferes with sex steroid hormone metabolism, respectively, highly selective 11beta-HSD1 inhibitors are required for successful therapy. Here, we employed the software package Catalyst to develop ligand-based multifeature pharmacophore models for 11beta-HSD1 inhibitors. Virtual screening experiments and subsequent in vitro evaluation of promising hits revealed several selective inhibitors. Efficient inhibition of recombinant human 11beta-HSD1 in intact transfected cells, as well as of the endogenous enzyme in mouse 3T3-L1 adipocytes and C2C12 myotubes, was demonstrated for compound 27, which was able to block subsequent cortisol-dependent activation of glucocorticoid receptors with only minor direct effects on the receptor itself. Our results suggest that inhibitor-based pharmacophore models for 11beta-HSD1, in combination with suitable cell-based activity assays, including assays for related enzymes, can be used for the identification of selective and potent inhibitors.
Abstract:
The electric utility business is an inherently dangerous area to work in, with employees exposed to many potential hazards daily. One such hazard is an arc flash: a rapid release of energy, referred to as incident energy, caused by an electric arc. Due to the random nature and occurrence of an arc flash, one can only prepare for and minimize the extent of harm to oneself and other employees, and damage to equipment, from such a violent event. Effective January 1, 2009, the National Electrical Safety Code (NESC) requires that an arc-flash assessment be performed by companies whose employees work on or near energized equipment, to determine the potential exposure to an electric arc. To comply with the NESC requirement, Minnesota Power's (MP's) existing short-circuit and relay-coordination software package, ASPEN OneLiner™, one of the first packages to implement an arc-flash module, is used to conduct an arc-flash hazard analysis. The package is also benchmarked against the equations provided in IEEE Std 1584-2002 and ultimately used to determine the incident energy levels on the MP transmission system. This report covers the history of arc-flash hazards; the analysis methods, both software-based and empirically derived equations; concerns with the calculation methods; and the work conducted at MP. The work also produced two offline software products to conduct and verify an offline arc-flash hazard analysis.
Abstract:
Making an accurate diagnosis is essential to ensure that a patient receives appropriate treatment and correct information regarding their prognosis. Characteristics of diagnostic tests are quantified in test accuracy studies, but many such studies have methodological flaws. The HSRC evidence-based diagnosis programme has focused on methods for systematic reviews of test accuracy studies, and on the wider context in which tests are ordered and interpreted. We carried out a range of projects relating to literature searching, quality assessment, meta-analysis, presentation of results, and interactions between doctors and patients during the diagnostic process. We have shown that systematic reviews of test accuracy studies should search a range of databases and that current diagnostic filters do not have sufficient accuracy to be used in test accuracy reviews. Summary quality scores should not be used in test accuracy reviews; the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool is acceptable for quality assessment of test accuracy studies. We have shown that the hierarchical summary receiver operating characteristic (HSROC) and bivariate models for meta-analysis of test accuracy are statistically equivalent in many circumstances, and have developed an add-on module for the statistical software package Stata that enables these statistically rigorous models to be fitted by those without expert statistical knowledge. Three areas would benefit from further research: literature searching, synthesis of results from individual patient data, and presentation of results.
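The bivariate model mentioned above operates on each study's logit-transformed sensitivity and specificity. A sketch of just that per-study transformation, with a 0.5 zero-cell continuity correction (a common convention, assumed here); the random-effects model fitting itself is omitted:

```python
# Sketch: the bivariate model treats each study's (logit sensitivity,
# logit specificity) pair as bivariate normal. This shows only the per-study
# input transformation that feeds such a model.
import math

def logit_sens_spec(tp, fp, fn, tn):
    """Logit-transformed sensitivity and specificity from a 2x2 table,
    with a 0.5 continuity correction when any cell is zero."""
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)

    def logit(p):
        return math.log(p / (1 - p))

    return logit(sens), logit(spec)

# Hypothetical study: 90/100 diseased test positive, 90/100 healthy test negative
ls, lsp = logit_sens_spec(tp=90, fp=10, fn=10, tn=90)   # both logit(0.9) = ln 9
```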
Abstract:
In-cylinder pressure transducers have been used for decades to record combustion pressure inside a running engine. However, due to the extreme operating environment, transducer design and installation must be considered in order to minimize measurement error. One such error is caused by thermal shock, where the pressure transducer experiences a high heat flux that can distort the pressure transducer diaphragm and also change the crystal sensitivity. This research focused on investigating the effects of thermal shock on in-cylinder pressure transducer data quality using a 2.0L, four-cylinder, spark-ignited, direct-injected, turbo-charged GM engine. Cylinder four was modified with five ports to accommodate pressure transducers of different manufacturers. They included an AVL GH14D, an AVL GH15D, a Kistler 6125C, and a Kistler 6054AR. The GH14D, GH15D, and 6054AR were M5 size transducers. The 6125C was a larger, 6.2mm transducer. Note that both of the AVL pressure transducers utilized a PH03 flame arrestor. Sweeps of ignition timing (spark sweep), engine speed, and engine load were performed to study the effects of thermal shock on each pressure transducer. The project consisted of two distinct phases which included experimental engine testing as well as simulation using a commercially available software package. A comparison was performed to characterize the quality of the data between the actual cylinder pressure and the simulated results. This comparison was valuable because the simulation results did not include thermal shock effects. All three sets of tests showed the peak cylinder pressure was basically unaffected by thermal shock. Comparison of the experimental data with the simulated results showed very good correlation. 
The spark sweep was performed at 1300 RPM and 3.3 bar NMEP and showed that the differences between the simulated results (no thermal shock) and the experimental data for the indicated mean effective pressure (IMEP) and the pumping mean effective pressure (PMEP) were significantly less than the published accuracies. All transducers had an IMEP percent difference of less than 0.038%, and less than 0.32% for PMEP. Kistler and AVL publish that the accuracy of their pressure transducers is within plus or minus 1% for the IMEP (AVL 2011; Kistler 2011). In addition, the difference in average exhaust absolute pressure between the simulated results and experimental data was greatest for the two Kistler pressure transducers; their mounting location and lack of a flame arrestor are believed to be the cause of the increased error. For the engine speed sweep, the torque output was held constant at 203 Nm (150 ft-lbf) from 1500 to 4000 RPM. The difference in IMEP was less than 0.01% and the PMEP was less than 1%, except for the AVL GH14D at 5% and the AVL GH15DK at 2.25%. A noticeable error in PMEP appeared as the load increased during the engine speed sweeps, as expected. The load sweep was conducted at 2000 RPM over a range of NMEP from 1.1 to 14 bar. The differences in IMEP were less than 0.08%, while the PMEP values were below 1% except for the AVL GH14D at 1.8% and the AVL GH15DK at 1.25%. In-cylinder pressure transducer data quality was effectively analyzed using a combination of experimental data and simulation results. Several criteria can be used to investigate the impact of thermal shock on data quality and to determine the best location and thermal protection for various transducers.
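The IMEP figures compared above are the net cycle work per unit displaced volume, IMEP = ∮p dV / V_d. A minimal sketch with a hypothetical idealized p-V loop (real data come from crank-angle-resolved pressure traces and a slider-crank volume model):

```python
# Sketch: indicated mean effective pressure from a closed p-V loop,
# IMEP = (trapezoidal closed-loop integral of p dV) / displaced volume.

def imep(pressures, volumes):
    """Cycle work by trapezoidal integration around the closed p-V loop,
    divided by the displaced volume (max - min cylinder volume)."""
    work = 0.0
    n = len(pressures)
    for i in range(n):
        j = (i + 1) % n   # wrap around to close the loop
        work += 0.5 * (pressures[i] + pressures[j]) * (volumes[j] - volumes[i])
    v_d = max(volumes) - min(volumes)
    return work / v_d

# Idealized rectangular cycle (hypothetical numbers): expand at 10 bar, return at 2 bar
volumes = [1.0, 2.0, 2.0, 1.0]     # arbitrary volume units
pressures = [10.0, 10.0, 2.0, 2.0]  # bar
result = imep(pressures, volumes)   # 8.0 bar for this idealized loop
```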