933 results for Software Package Data Exchange (SPDX)


Relevance:

100.00%

Publisher:

Abstract:

This thesis describes the design of a prototype for thermal desorption spectroscopy measurements and the first experiments carried out with it. The characteristic instruments of the apparatus, such as the quadrupole mass spectrometer and the diffusion pump, are described in detail, along with the parts built ad hoc for the device, namely the sample-holder structure and the support for the furnace used to heat the analysed substances. Particular attention is devoted to the software side of the prototype, which uses DDE (Dynamic Data Exchange) technology to pass data between two different programs running on the same platform; the operation of the software communicating directly with the spectrometer and of the LabView program created to monitor and save the data collected by the apparatus is therefore illustrated. The last part of the thesis concerns the first thermal desorption spectroscopy experiments performed, including both the preliminary runs used to test the quality of the prototype and those from which a thermal desorption curve can be obtained for the various gases analysed in the chamber, such as hydrogen.
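
The abstract above describes an architecture rather than code: one program talks to the spectrometer while a LabView program monitors and logs the incoming values over DDE. As a very loose illustration of that two-program pattern only (Python with a multiprocessing queue standing in for DDE and LabView; every name and value below is made up), a minimal sketch could look like this:

```python
import csv
import time
from multiprocessing import Process, Queue

def spectrometer_reader(queue: Queue, n_samples: int = 10) -> None:
    """Stand-in for the program that talks to the quadrupole mass spectrometer."""
    for i in range(n_samples):
        # A real reader would query the instrument here; we emit dummy values.
        queue.put({"time_s": time.time(), "mass_amu": 2.0,
                   "partial_pressure": 1e-9 * (i + 1)})
        time.sleep(0.1)
    queue.put(None)  # sentinel: acquisition finished

def monitor_and_save(queue: Queue, path: str = "desorption_log.csv") -> None:
    """Stand-in for the monitoring/logging program (the LabView role)."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["time_s", "mass_amu", "partial_pressure"])
        writer.writeheader()
        while True:
            reading = queue.get()
            if reading is None:
                break
            writer.writerow(reading)

if __name__ == "__main__":
    q = Queue()
    producer = Process(target=spectrometer_reader, args=(q,))
    consumer = Process(target=monitor_and_save, args=(q,))
    producer.start(); consumer.start()
    producer.join(); consumer.join()
```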

Relevance:

100.00%

Publisher:

Abstract:

Redshift Space Distortions (RSD) are an apparent anisotropy in the distribution of galaxies due to their peculiar motions. These features are imprinted in the galaxy correlation function, which describes how these structures are distributed around each other. RSD can be represented by a distortion parameter $\beta$, which is strictly related to the growth of cosmic structures. For this reason, measurements of RSD can be exploited to constrain cosmological parameters, such as the neutrino mass. Neutrinos are neutral subatomic particles that come in three flavours: the electron, the muon and the tau neutrino. Their mass differences can be measured in oscillation experiments, while information on the absolute scale of the neutrino mass can come from cosmology, since neutrinos leave a characteristic imprint on the large-scale structure of the Universe. The aim of this thesis is to provide constraints on the accuracy with which the neutrino mass can be estimated when exploiting measurements of RSD. In particular, we want to describe how the error on the neutrino mass estimate depends on three fundamental parameters of a galaxy redshift survey: the density of the catalogue, the bias of the sample considered and the volume observed. To do this, we make use of the BASICC simulation, from which we extract a series of dark matter halo catalogues characterized by different values of bias, density and volume. These mock data are analysed via a Markov Chain Monte Carlo procedure, in order to estimate the neutrino mass fraction, using the software package CosmoMC, conveniently modified for this purpose. In this way we are able to extract a fitting formula describing our measurements, which can be used to forecast the precision reachable with this kind of observations in future surveys such as Euclid.
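
The abstract refers to a Markov Chain Monte Carlo analysis performed with a modified CosmoMC. The sketch below is not that pipeline; it is a minimal Metropolis-Hastings sampler in Python estimating a single parameter from mock Gaussian-distributed measurements, included only to make the MCMC step concrete (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

# Mock "measurements" of a distortion parameter beta with a known Gaussian error.
beta_true, sigma_obs = 0.45, 0.05
data = rng.normal(beta_true, sigma_obs, size=50)

def log_likelihood(beta: float) -> float:
    """Gaussian log-likelihood of the mock data for a trial value of beta."""
    return -0.5 * np.sum((data - beta) ** 2 / sigma_obs**2)

def metropolis_hastings(n_steps: int = 20000, step: float = 0.01) -> np.ndarray:
    """Minimal Metropolis-Hastings chain with a symmetric Gaussian proposal."""
    chain = np.empty(n_steps)
    current = 0.5                              # arbitrary starting point
    current_logl = log_likelihood(current)
    for i in range(n_steps):
        proposal = current + rng.normal(0.0, step)
        proposal_logl = log_likelihood(proposal)
        # Accept with probability min(1, L_proposal / L_current).
        if np.log(rng.uniform()) < proposal_logl - current_logl:
            current, current_logl = proposal, proposal_logl
        chain[i] = current
    return chain

chain = metropolis_hastings()
burn_in = 2000
print(f"beta = {chain[burn_in:].mean():.3f} +/- {chain[burn_in:].std():.3f}")
```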

Relevance:

100.00%

Publisher:

Abstract:

Modern ESI-LC-MS/MS techniques, in combination with bottom-up approaches, allow a qualitative and quantitative characterization of several thousand proteins in a single experiment. Data-independent acquisition methods such as MSE and the ion-mobility variants HDMSE and UDMSE are particularly well suited for label-free protein quantification. Owing to their high complexity, the data acquired in this way place special demands on the analysis software, and quantitative analysis of MSE/HDMSE/UDMSE data has so far been restricted to a few commercial solutions. In this work, a strategy and a series of new methods for the cross-run quantitative analysis of label-free MSE/HDMSE/UDMSE data were developed and implemented as the software ISOQuant. The first steps of the data analysis (feature detection, peptide and protein identification) use the commercial software PLGS. The independent PLGS results of all runs of an experiment are then merged into a relational database and reprocessed with dedicated algorithms (retention time alignment, feature clustering, multidimensional intensity normalization, multi-stage data filtering, protein inference, redistribution of the intensities of shared peptides, protein quantification). This post-processing significantly increases the reproducibility of the qualitative and quantitative results. To evaluate the performance of the quantitative data analysis and compare it with other solutions, a set of exactly defined hybrid-proteome samples was developed. The samples were acquired with the MSE and UDMSE methods, analysed with Progenesis QIP, synapter and ISOQuant, and compared. In contrast to synapter and Progenesis QIP, ISOQuant achieved both a high reproducibility of protein identification and a high precision and accuracy of protein quantification. In conclusion, the presented algorithms and the analysis workflow enable reliable and reproducible quantitative data analyses. With the software ISOQuant, a simple and efficient tool for routine high-throughput analyses of label-free MSE/HDMSE/UDMSE data has been developed. Together with the hybrid-proteome samples and the evaluation metrics, this constitutes a comprehensive system for evaluating quantitative acquisition and data analysis pipelines.
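
ISOQuant's multidimensional intensity normalization is not described in enough detail here to reproduce it; as a generic illustration of the simpler idea of run-to-run normalization of a feature-intensity matrix (made-up numbers, Python instead of the actual implementation), one can equalize the per-run medians:

```python
import numpy as np

# Hypothetical feature-intensity matrix: rows = peptide features, columns = LC-MS runs.
intensities = np.array([
    [1.0e5, 1.6e5, 0.8e5],
    [2.0e5, 3.1e5, 1.7e5],
    [0.5e5, 0.9e5, 0.4e5],
    [4.0e5, 6.2e5, 3.3e5],
])

def median_normalize(x: np.ndarray) -> np.ndarray:
    """Scale every run so that its median feature intensity matches the overall median."""
    run_medians = np.median(x, axis=0)     # one median per run
    target = np.median(run_medians)        # common reference level
    return x * (target / run_medians)      # per-column scaling via broadcasting

normalized = median_normalize(intensities)
print(np.median(normalized, axis=0))       # all run medians are now equal
```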

Relevance:

100.00%

Publisher:

Abstract:

Satellite image classification involves designing and developing efficient image classifiers. With satellite image data and image analysis methods multiplying rapidly, selecting the right mix of data sources and data analysis approaches has become critical to the generation of quality land-use maps. In this study, a new postprocessing information fusion algorithm for the extraction and representation of land-use information based on high-resolution satellite imagery is presented. This approach can produce land-use maps with sharp interregional boundaries and homogeneous regions. The proposed approach is conducted in five steps. First, a GIS layer - ATKIS data - was used to generate two coarse homogeneous regions, i.e. urban and rural areas. Second, a thematic (class) map was generated using a hybrid spectral classifier combining the Gaussian Maximum Likelihood (GML) algorithm and the ISODATA classifier. Third, a probabilistic relaxation algorithm was applied to the thematic map, resulting in a smoothed thematic map. Fourth, edge detection and edge thinning techniques were used to generate a contour map with pixel-width interclass boundaries. Fifth, the contour map was superimposed on the thematic map by a region-growing algorithm with the contour map and the smoothed thematic map as two constraints. For the operation of the proposed method, a software package was developed in the programming language C. This software package comprises the GML algorithm, a probabilistic relaxation algorithm, the TBL edge detector, an edge thresholding algorithm, a fast parallel thinning algorithm, and a region-growing information fusion algorithm. The county of Landau in the state of Rheinland-Pfalz, Germany, was selected as the test site, and high-resolution IRS-1C imagery was used as the principal input data.
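
The second step relies on the Gaussian Maximum Likelihood decision rule. The sketch below shows that rule in Python rather than the C package described above, assuming per-class means, covariances and priors have already been estimated from training pixels (the statistics and pixels here are invented):

```python
import numpy as np

def gml_classify(pixels: np.ndarray, means: list, covs: list, priors: list) -> np.ndarray:
    """Assign each pixel (one row per pixel) to the class with the highest
    Gaussian log-likelihood plus log-prior (the GML decision rule)."""
    scores = np.empty((pixels.shape[0], len(means)))
    for k, (mu, cov, prior) in enumerate(zip(means, covs, priors)):
        diff = pixels - mu
        inv_cov = np.linalg.inv(cov)
        mahalanobis = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
        log_det = np.linalg.slogdet(cov)[1]
        scores[:, k] = np.log(prior) - 0.5 * (log_det + mahalanobis)
    return np.argmax(scores, axis=1)

# Toy example: two spectral bands, two classes with made-up statistics.
means = [np.array([50.0, 60.0]), np.array([120.0, 90.0])]
covs = [np.eye(2) * 25.0, np.eye(2) * 40.0]
priors = [0.5, 0.5]
pixels = np.array([[52.0, 61.0], [118.0, 95.0]])
print(gml_classify(pixels, means, covs, priors))   # -> [0 1]
```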

Relevance:

100.00%

Publisher:

Abstract:

Over the last 20 years, technological progress has brought profound change to many fields, including healthcare, where natively digital diagnostic equipment has appeared, such as Computed Tomography (CT), Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI) and ultrasound. Unlike traditional diagnostics, such as conventional radiology, which return as the result of an examination a two-dimensional image obtained from the simple projection of the anatomical structure under investigation, these new systems are able to generate tomographic scans. Having digital images containing three-dimensional data represents an enormous step forward for diagnostic investigation, but in order to extract and exploit their valuable information content the right tools are needed, and, given the nature of the acquisitions, these are to be found in the world of computer science. To this end, this thesis presents a software package for the visualization, analysis and processing of medical images called 3D Slicer, a powerful tool that can be exploited in a variety of medical contexts. The first chapter provides an introduction to the program; the second chapter offers a more technical treatment that examines some basic and some more specific functionalities of the software; finally, the third chapter considers a vascular endoprosthesis procedure and shows how, thanks to the support of innovative surgical navigation systems, 3D Slicer can also be used in the intraoperative setting.

Relevance:

100.00%

Publisher:

Abstract:

This thesis aims to assess similarities and mismatches between the outputs of two independent methods for cloud cover quantification and classification, which rest on quite different physical bases. One of them is the SAFNWC software package, designed to process radiance data acquired by the SEVIRI sensor in the VIS/IR. The other is the MWCC algorithm, which uses the brightness temperatures acquired by the AMSU-B and MHS sensors in their channels centred in the MW water vapour absorption band. At a first stage their cloud detection capability has been tested by comparing the cloud masks they produce. These showed a good agreement between the two methods, although some critical situations stand out: the MWCC, in effect, fails to reveal clouds which according to SAFNWC are fractional, cirrus, very low or high opaque clouds. In the second stage of the inter-comparison, the pixels classified as cloudy by both packages have been compared in terms of the assigned cloud class. The overall tendency observed for the MWCC method is an overestimation of the lower cloud classes; vice versa, the higher the cloud top, the larger the cloud portion that the MWCC does not reveal but that is detected by the SAFNWC tool. This also emerges from a series of tests carried out using the cloud top height information to evaluate the height ranges in which each MWCC category is defined. Therefore, although the two methods are intended to provide the same kind of information, in reality they return quite different details on the same atmospheric column: the SAFNWC retrieval, being very sensitive to the cloud top temperature, returns the actual level reached by the cloud, while the MWCC, by exploiting the penetration capability of microwaves, gives information about levels located more deeply within the atmospheric column.
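
The first comparison stage boils down to contrasting two binary cloud masks on a common grid. The snippet below is only a schematic of that bookkeeping (invented arrays, not SAFNWC or MWCC output):

```python
import numpy as np

# Hypothetical binary cloud masks on a common grid: 1 = cloudy, 0 = clear.
mask_safnwc = np.array([[1, 1, 0, 0],
                        [1, 0, 0, 1],
                        [0, 0, 1, 1]])
mask_mwcc   = np.array([[1, 0, 0, 0],
                        [1, 0, 0, 1],
                        [0, 0, 1, 0]])

both_cloudy = np.sum((mask_safnwc == 1) & (mask_mwcc == 1))
both_clear  = np.sum((mask_safnwc == 0) & (mask_mwcc == 0))
only_safnwc = np.sum((mask_safnwc == 1) & (mask_mwcc == 0))   # missed by MWCC
only_mwcc   = np.sum((mask_safnwc == 0) & (mask_mwcc == 1))   # flagged only by MWCC
total = mask_safnwc.size

print(f"agreement fraction  : {(both_cloudy + both_clear) / total:.2f}")
print(f"missed by MWCC      : {only_safnwc}")
print(f"flagged only by MWCC: {only_mwcc}")
```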

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE: To determine the reproducibility and validity of video screen measurement (VSM) of sagittal plane joint angles during gait. METHODS: Seventeen children with spastic cerebral palsy walked on a 10 m walkway. Videos were recorded and 3D instrumented gait analysis (3D-IGA) was performed. Two investigators measured six sagittal joint/segment angles (shank, ankle, knee, hip, pelvis, and trunk) using a custom-made software package. The intra- and interrater reproducibility were expressed by the intraclass correlation coefficient (ICC), the standard error of measurement (SEM) and the smallest detectable difference (SDD). The agreement between VSM and 3D joint angles was illustrated by Bland-Altman plots and limits of agreement (LoA). RESULTS: Regarding the intrarater reproducibility of VSM, the ICC ranged from 0.99 (shank) to 0.58 (trunk), the SEM from 0.81 degrees (shank) to 5.97 degrees (trunk) and the SDD from 1.80 degrees (shank) to 16.55 degrees (trunk). Regarding the interrater reproducibility, the ICC ranged from 0.99 (shank) to 0.48 (trunk), the SEM from 0.70 degrees (shank) to 6.78 degrees (trunk) and the SDD from 1.95 degrees (shank) to 18.8 degrees (trunk). The LoA between VSM and 3D data ranged from 0.4+/-13.4 degrees (knee extension stance) to 12.0+/-14.6 degrees (ankle dorsiflexion swing). CONCLUSION: When performed by the same observer, VSM mostly allows the detection of relevant changes after an intervention. However, VSM angles differ from 3D-IGA and do not reflect the true sagittal joint position, probably due to additional movements in the other planes.
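
For readers unfamiliar with the reliability indices quoted above, the sketch below shows how SEM, SDD and Bland-Altman limits of agreement are commonly derived; it uses made-up paired measurements and an assumed ICC, not the study's data:

```python
import numpy as np

# Made-up paired measurements of one joint angle (degrees) by two raters.
rater_a = np.array([12.0, 15.5, 9.8, 20.1, 17.3, 11.2])
rater_b = np.array([13.1, 14.9, 11.0, 19.0, 18.2, 12.5])

# Standard error of measurement and smallest detectable difference from an ICC.
icc = 0.90                                   # assume an ICC obtained elsewhere
sd_pooled = np.std(np.concatenate([rater_a, rater_b]), ddof=1)
sem = sd_pooled * np.sqrt(1.0 - icc)         # SEM = SD * sqrt(1 - ICC)
sdd = 1.96 * np.sqrt(2.0) * sem              # SDD = 1.96 * sqrt(2) * SEM

# Bland-Altman bias and 95% limits of agreement between the two raters.
diff = rater_a - rater_b
bias = diff.mean()
loa_low, loa_high = bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1)

print(f"SEM = {sem:.2f} deg, SDD = {sdd:.2f} deg")
print(f"bias = {bias:.2f} deg, LoA = [{loa_low:.2f}, {loa_high:.2f}] deg")
```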

Relevance:

100.00%

Publisher:

Abstract:

Several practical obstacles in data handling and evaluation complicate the use of quantitative localized magnetic resonance spectroscopy (qMRS) in clinical routine MR examinations. To overcome these obstacles, a clinically feasible MR pulse sequence protocol for qMRS, based on standard available MR pulse sequences, has been implemented along with newly added functionalities in the free software package jMRUI-v5.0 to make qMRS attractive for clinical routine. This enables (a) easy and fast DICOM data transfer between the MR console and the qMRS computer, (b) visualization of combined MR spectroscopy and imaging, (c) creation and network transfer of spectroscopy reports in DICOM format, (d) integration of advanced water reference models for absolute quantification, and (e) setup of databases containing normal metabolite concentrations of healthy subjects. To demonstrate the workflow of qMRS using these implementations, databases of normal metabolite concentrations in different regions of brain tissue were created from spectroscopic data acquired in 55 normal subjects (age range 6-61 years) on 1.5T and 3T MR systems, and the approach is illustrated in one clinical case of a typical brain tumor (primitive neuroectodermal tumor). The MR pulse sequence protocol and the newly implemented software functionalities facilitate the incorporation of qMRS, and of reference to normal metabolite concentration data, into daily clinical routine. Magn Reson Med, 2013. © 2012 Wiley Periodicals, Inc.
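
The advanced water reference models mentioned in (d) are not specified here; as a heavily simplified illustration of the basic internal water referencing idea behind absolute quantification (ignoring tissue water content, CSF fraction and sequence-specific details, with invented signal amplitudes), one can write:

```python
def metabolite_concentration(s_met: float, s_water: float, n_protons_met: int,
                             water_conc_mM: float = 55510.0,
                             f_relax_met: float = 1.0,
                             f_relax_water: float = 1.0) -> float:
    """Basic internal water referencing:
    C_met = (S_met / S_water) * (N_H,water / N_H,met) * C_water,
    optionally corrected for the relaxation attenuation factor f of each signal."""
    n_protons_water = 2
    return (s_met / s_water) * (n_protons_water / n_protons_met) \
           * water_conc_mM * (f_relax_water / f_relax_met)

# Invented amplitudes for an NAA CH3 signal (3 protons) and unsuppressed water.
print(f"NAA ~ {metabolite_concentration(s_met=0.0003, s_water=1.0, n_protons_met=3):.1f} mM")
```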

Relevance:

100.00%

Publisher:

Abstract:

The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. We detail some of the design decisions, software paradigms and operational strategies that have allowed a small number of researchers to provide a wide variety of innovative, extensible software solutions in a relatively short time. The use of an object-oriented programming paradigm, the adoption and development of a software package system, designing by contract, distributed development and collaboration with other projects are elements of this project's success. Individually, each of these concepts is useful and important, but when combined they have provided a strong basis for the rapid development and deployment of innovative and flexible research software for scientific computation. A primary objective of this initiative is the achievement of total remote reproducibility of novel algorithmic research results.

Relevance:

100.00%

Publisher:

Abstract:

Making an accurate diagnosis is essential to ensure that a patient receives appropriate treatment and correct information regarding their prognosis. Characteristics of diagnostic tests are quantified in test accuracy studies, but many such studies have methodological flaws. The HSRC evidence-based diagnosis programme has focused on methods for systematic reviews of test accuracy studies, and the wider context in which tests are ordered and interpreted. We carried out a range of projects relating to literature searching, quality assessment, meta-analysis, presentation of results, and interactions between doctors and patients during the diagnostic process. We have shown that systematic reviews of test accuracy studies should search a range of databases and that current diagnostic filters do not have sufficient accuracy to be used in test accuracy reviews. Summary quality scores should not be used in test accuracy reviews; the Quality Assessment of Studies of Diagnostic Accuracy included in Systematic Reviews (QUADAS) tool for assessing test accuracy studies is acceptable for quality assessment. We have shown that the hierarchical summary receiver operating characteristic (HSROC) and bivariate models for meta-analysis of test accuracy are statistically equivalent in many circumstances, and have developed an add-on module for the statistical software package Stata that enables these statistically rigorous models to be fitted by those without expert statistical knowledge. Three areas that would benefit from further research are literature searching, synthesis of results from individual patient data and presentation of results.
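
The HSROC and bivariate models themselves require hierarchical modelling across studies (and, in Stata, the add-on module described above); the snippet below only shows the per-study quantities those models operate on, computed from an invented 2x2 table:

```python
import numpy as np

# Invented 2x2 counts from a single test accuracy study.
tp, fp, fn, tn = 90, 12, 10, 188

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

# The bivariate model works with logit-transformed proportions and their
# approximate within-study variances (1/successes + 1/failures).
logit_sens = np.log(sensitivity / (1 - sensitivity))
logit_spec = np.log(specificity / (1 - specificity))
var_logit_sens = 1 / tp + 1 / fn
var_logit_spec = 1 / tn + 1 / fp

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
print(f"logit(sens) = {logit_sens:.2f} (var {var_logit_sens:.3f}), "
      f"logit(spec) = {logit_spec:.2f} (var {var_logit_spec:.3f})")
```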

Relevance:

100.00%

Publisher:

Abstract:

HYPOTHESIS A previously developed image-guided robot system can safely drill a tunnel from the lateral mastoid surface, through the facial recess, to the middle ear, as a viable alternative to conventional mastoidectomy for cochlear electrode insertion. BACKGROUND Direct cochlear access (DCA) provides a minimally invasive tunnel from the lateral surface of the mastoid through the facial recess to the middle ear for cochlear electrode insertion. A safe and effective tunnel drilled through the narrow facial recess requires a highly accurate image-guided surgical system. Previous attempts have relied on patient-specific templates and robotic systems to guide drilling tools. In this study, we report on improvements made to an image-guided surgical robot system developed specifically for this purpose and the resulting accuracy achieved in vitro. MATERIALS AND METHODS The proposed image-guided robotic DCA procedure was carried out bilaterally on 4 whole-head cadaver specimens. Specimens were implanted with titanium fiducial markers and imaged with cone-beam CT. A preoperative plan was created using a custom software package in which the relevant anatomical structures of the facial recess were segmented and a drill trajectory targeting the round window was defined. Patient-to-image registration was performed with the custom robot system to reference the preoperative plan, and the DCA tunnel was drilled in 3 stages with progressively longer drill bits. The position of the drilled tunnel was defined as a line fitted to a point cloud of the segmented tunnel using principal component analysis (PCA function in MATLAB). The accuracy of the DCA was then assessed by coregistering preoperative and postoperative image data and measuring the deviation of the drilled tunnel from the plan. The final step of electrode insertion was also performed through the DCA tunnel after manual removal of the promontory through the external auditory canal. RESULTS Drilling error was defined as the lateral deviation of the tool in the plane perpendicular to the drill axis (excluding depth error). Errors of 0.08 ± 0.05 mm and 0.15 ± 0.08 mm were measured on the lateral mastoid surface and at the target on the round window, respectively (n = 8). Full electrode insertion was possible in 7 cases. In 1 case, the electrode was partially inserted, with 1 contact pair external to the cochlea. CONCLUSION The purpose-built robot system was able to perform a safe and reliable DCA for cochlear implantation. The workflow implemented in this study mimics the envisioned clinical procedure, showing the feasibility of future clinical implementation.
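
The tunnel-axis fit described in the methods (a line fitted to the segmented point cloud by principal component analysis) and the lateral-deviation metric can be sketched as follows, using NumPy instead of the MATLAB PCA function and a synthetic point cloud rather than the study's segmentations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic point cloud of a drilled tunnel: points scattered around a straight axis (mm).
t = np.linspace(0.0, 30.0, 300)                 # depth along the tunnel
true_dir = np.array([0.20, 0.10, 0.97])
true_dir /= np.linalg.norm(true_dir)
points = np.array([5.0, -3.0, 0.0]) + np.outer(t, true_dir) + rng.normal(0, 0.05, (300, 3))

# Fit a line by PCA: centroid plus first principal axis of the centered cloud.
centroid = points.mean(axis=0)
_, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
axis = vt[0]                                    # unit vector of the fitted tunnel axis

def lateral_deviation(p: np.ndarray) -> float:
    """Distance of a point from the fitted axis, i.e. the error component
    perpendicular to the drill direction (depth error excluded)."""
    d = p - centroid
    return float(np.linalg.norm(d - np.dot(d, axis) * axis))

planned_target = np.array([11.0, 0.05, 29.2])   # hypothetical planned target point
print(f"lateral deviation at target: {lateral_deviation(planned_target):.2f} mm")
```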

Relevance:

100.00%

Publisher:

Abstract:

A management information system (MIS) provides a means for collecting, reporting, and analyzing data from all segments of an organization. Such systems are common in business but rare in libraries. The Houston Academy of Medicine-Texas Medical Center Library developed an MIS that operates on a system of networked IBM PCs and Paradox, a commercial database software package. The data collected in the system include monthly reports, client profile information, and data collected at the time of service requests. The MIS assists with enforcement of library policies, ensures that correct information is recorded, and provides reports for library managers. It also can be used to help answer a variety of ad hoc questions. Future plans call for the development of an MIS that could be adapted to other libraries' needs, and a decision-support interface that would facilitate access to the data contained in the MIS databases.

Relevance:

100.00%

Publisher:

Abstract:

Stata is a general-purpose software package that has become popular in various disciplines such as epidemiology, economics, and the social sciences. Users like Stata for its scientific approach, its robustness and reliability, and the ease with which its functionality can be extended by user-written programs. In this talk I will first give a brief overview of the functionality of Stata and then discuss two specific features: survey estimation and predictive margins/marginal effects. Most surveys are based on complex samples that contain multiple sampling stages, are stratified or clustered, and feature unequal selection probabilities. Standard estimators can produce misleading results in such samples unless the peculiarities of the sampling plan are taken into account. Stata offers survey statistics for complex samples for a wide variety of estimators and supports several variance estimation procedures such as linearization, jackknife, and balanced repeated replication (see Kreuter and Valliant, 2007, Stata Journal 7: 1-21). In the talk I will illustrate these features using applied examples, and I will also show how user-written commands can be adapted to support complex samples. The models we fit to our data can also be complex, making them difficult to interpret, especially in the case of nonlinear or non-additive models (Mood, 2010, European Sociological Review 26: 67-82). Stata provides a number of highly useful commands for making the results of such models accessible by computing and displaying predictive margins and marginal effects. In my talk I will discuss these commands and provide various examples demonstrating their use.
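
As a rough Python counterpart to the predictive margins/marginal effects discussed above (simulated data, statsmodels instead of Stata), the sketch below fits a logistic model and computes the average marginal effect of a covariate with the usual p(1-p)*beta formula for the logit model:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulated data: binary outcome depending on one continuous covariate.
n = 2000
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-(-0.5 + 1.2 * x)))
y = (rng.uniform(size=n) < p_true).astype(int)

X = sm.add_constant(x)                     # design matrix with an intercept
result = sm.Logit(y, X).fit(disp=False)    # maximum likelihood fit

# Average marginal effect of x: mean over observations of p_i * (1 - p_i) * beta_x.
p_hat = result.predict(X)
ame = np.mean(p_hat * (1 - p_hat)) * result.params[1]
print(f"average marginal effect of x: {ame:.3f}")

# statsmodels can also report this directly:
print(result.get_margeff(at="overall").summary())
```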

Relevance:

100.00%

Publisher:

Abstract:

Knowledge processes are critical to outsourced software projects. According to outsourcing research, outsourced software projects succeed if they manage to integrate the client's business knowledge and the vendor's technical knowledge. In this paper, we submit that this view may not be wrong, but it is incomplete for a significant part of outsourced software work, namely software maintenance. Data from six software-maintenance outsourcing transitions indicate that application knowledge, which vendor engineers acquire over time through practice, can be more important than business or technical knowledge. Application knowledge was the dominant knowledge during knowledge transfer activities, and its acquisition enabled vendor staff to solve maintenance tasks. We discuss implications for widespread assumptions in outsourcing research.