987 results for File Formats

Relevance: 100.00%

Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance: 100.00%

Abstract:

Notes on how to import data from Excel, Access and text files into SPSS. Used in Research Skills for Biomedical Science.
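
As a small illustration of the kind of conversion such notes cover (file names and data are invented): a comma-separated export from Excel or Access can be re-written as a tab-delimited text file with a header row, a layout that import wizards such as SPSS's can read directly.

```python
import csv

# Hypothetical input: a comma-separated export from Excel or Access.
rows = [["id", "age", "score"], ["1", "34", "7.5"], ["2", "41", "6.0"]]
with open("export.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# Re-write as tab-delimited, keeping the header row intact.
with open("export.csv", newline="") as src, \
     open("for_spss.dat", "w", newline="") as dst:
    writer = csv.writer(dst, delimiter="\t")
    for row in csv.reader(src):
        writer.writerow(row)
```

The resulting `for_spss.dat` keeps variable names in the first row, so they map straight onto SPSS variable names on import.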

Relevance: 70.00%

Abstract:

Every Argo data file submitted by a DAC for distribution on the GDAC has its format and data consistency checked by the Argo FileChecker. Two types of checks are applied:

1. Format checks: ensure the file formats match the Argo standards precisely.
2. Data consistency checks: additional checks performed on a file after it passes the format checks. These do not duplicate any of the quality control checks performed elsewhere; they can be thought of as "sanity checks" to ensure that the data are consistent with each other. They enforce data standards and ensure that certain data values are reasonable and/or consistent with other information in the files. Examples of the "data standard" checks are the "mandatory parameters" defined for meta-data files and the technical parameter names in technical data files.

Files with format or consistency errors are rejected by the GDAC and are not distributed. Less serious problems generate warnings, and the file is still distributed on the GDAC.

Reference tables and data standards: many of the consistency checks involve comparing the data to the published reference tables and data standards. These tables are documented in the User's Manual. (The FileChecker implements "text versions" of these tables.)
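
A minimal sketch of this two-stage checking; the field names (`platform_number`, `juld`, `pres`) and the rules themselves are invented for illustration and are not the real Argo specification.

```python
def check_format(record):
    """Stage 1: reject files whose structure is wrong (illustrative rules)."""
    errors = []
    for field in ("platform_number", "juld", "pres"):
        if field not in record:
            errors.append(f"missing mandatory field: {field}")
    return errors

def check_consistency(record):
    """Stage 2: sanity checks, run only after the format checks pass."""
    warnings, errors = [], []
    if record["pres"] and min(record["pres"]) < 0:
        errors.append("negative pressure value")          # fatal: reject
    if record["juld"] < 0:
        warnings.append("suspicious Julian day")          # non-fatal: warn
    return errors, warnings

def file_status(record):
    fmt_errors = check_format(record)
    if fmt_errors:
        return "rejected", fmt_errors
    errors, warnings = check_consistency(record)
    if errors:
        return "rejected", errors
    return "distributed", warnings   # warnings do not block distribution

status, notes = file_status({"platform_number": "6901234",
                             "juld": 25567.5,
                             "pres": [5.0, 10.2, 1500.0]})
```

The key design point mirrored from the abstract: format errors and consistency errors both block distribution, while warnings are attached to a file that is still distributed.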

Relevance: 60.00%

Abstract:

Sausage is a protein sequence threading program, but with remarkable run-time flexibility. Using different scripts, it can calculate protein sequence-structure alignments, search structure libraries, swap force fields, create models from alignments, convert file formats and analyse results. There are several different force fields which might be classed as knowledge-based, although they do not rely on Boltzmann statistics. Different force fields are used for alignment calculations and subsequent ranking of calculated models.

Relevance: 60.00%

Abstract:

Objective: The aim of this study was to evaluate the performances of observers in diagnosing proximal caries in digital images obtained from digital bitewing radiographs using two scanners and four digital cameras in Joint Photographic Experts Group (JPEG) and Tagged Image File Format (TIFF) files, and to compare them with the original conventional radiographs. Method: In total, 56 extracted teeth were radiographed with Kodak Insight film (Eastman Kodak, Rochester, NY) in a Kaycor Yoshida X-ray device (Kaycor X-707; Yoshida Dental Manufacturing Co., Tokyo, Japan) operating at 70 kV and 7 mA with an exposure time of 0.40 s. The radiographs were obtained and scanned by CanonScan D646U (Canon USA Inc., Newport News, VA) and Genius ColorPage HR7X (KYE Systems Corp. America, Doral, FL) scanners, and by Canon Powershot G2 (Canon USA Inc.), Canon RebelXT (Canon USA Inc.), Nikon Coolpix 8700 (Nikon Inc., Melville, NY), and Nikon D70s (Nikon Inc.) digital cameras in JPEG and TIFF formats. Three observers evaluated the images. The teeth were then observed under the microscope in polarized light for the verification of the presence and depth of the carious lesions. Results: The probability of no diagnosis ranged from 1.34% (Insight film) to 52.83% (CanonScan/JPEG). The sensitivity ranged from 0.24 (Canon RebelXT/JPEG) to 0.53 (Insight film), the specificity ranged from 0.93 (Nikon Coolpix/JPEG, Canon Powershot/TIFF, Canon RebelXT/JPEG and TIFF) to 0.97 (CanonScan/TIFF and JPEG) and the accuracy ranged from 0.82 (Canon RebelXT/JPEG) to 0.91 (CanonScan/JPEG). Conclusion: The carious lesion diagnosis did not change in either of the file formats (JPEG and TIFF) in which the images were saved for any of the equipment used. Only the CanonScan scanner did not have adequate performance in radiography digitalization for caries diagnosis and it is not recommended for this purpose. Dentomaxillofacial Radiology (2011) 40, 338-343. doi: 10.1259/dmfr/67185962
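
The diagnostic measures reported here can be computed from a confusion matrix; a small sketch with made-up counts (not the study's data).

```python
def diagnostic_measures(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                # true-positive rate
    specificity = tn / (tn + fp)                # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall agreement
    return sensitivity, specificity, accuracy

# Invented counts for illustration only.
sens, spec, acc = diagnostic_measures(tp=12, fp=3, tn=90, fn=15)
```

Note how a test can combine high specificity with low sensitivity, exactly the pattern several camera/format combinations show in the results above.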

Relevance: 60.00%

Abstract:

Objectives: To evaluate the influence of JPEG quality factors 100, 80 and 60 on the reproducibility of identification of cephalometric points on images of lateral cephalograms, compared with the Digital Imaging and Communications in Medicine (DICOM) format. Methods: The sample comprised 30 digital lateral cephalograms obtained from 30 individuals (15 males and 15 females) on a phosphor plate system in DICOM format. The images were converted to JPEG with quality factors 100, 80 and 60 with the aid of software, yielding a further 90 images. The 120 images (DICOM and JPEG 100, 80 and 60) were blinded, and 12 cephalometric points were identified on each image by three calibrated orthodontists using the x-y coordinate system in cephalometric software. Results: The results revealed that identification of cephalometric points was highly reproducible, except for the point Orbitale (Or) on the x-axis. The different file formats did not present a statistically significant difference. Conclusions: JPEG images of lateral cephalograms with quality factors 100, 80 and 60 did not present alterations in the reproducibility of identification of cephalometric points compared with the DICOM format. Good reproducibility was achieved for the 12 points, except for point Or on the x-axis. Dentomaxillofacial Radiology (2009) 38, 393-400. doi: 10.1259/dmfr/40996636
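
As an illustration of the kind of reproducibility question studied here (not the study's own statistics), the scatter of several observers' x-y placements of one point can be summarised as the mean distance from their centroid; the coordinates below are invented.

```python
from statistics import mean

def point_scatter(placements):
    """Mean Euclidean distance of each (x, y) placement from the centroid."""
    cx = mean(p[0] for p in placements)
    cy = mean(p[1] for p in placements)
    return mean(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in placements)

# Three observers marking the same point (e.g. Orbitale); values invented.
orbitale = [(120.0, 86.0), (122.0, 86.5), (121.0, 85.5)]
scatter = point_scatter(orbitale)
```

A larger scatter for one point than for the others, as reported for Or on the x-axis, signals lower inter-observer reproducibility for that landmark.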

Relevance: 60.00%

Abstract:

The personal computer revolution has resulted in the widespread availability of low-cost image analysis hardware. At the same time, new graphic file formats have made it possible to handle and display images at resolutions beyond the capability of the human eye. Consequently, there has been a significant research effort in recent years aimed at making use of these hardware and software technologies for flotation plant monitoring. Computer-based vision technology is now moving out of the research laboratory and into the plant to become a useful means of monitoring and controlling flotation performance at the cell level. This paper discusses the metallurgical parameters that influence surface froth appearance and examines the progress that has been made in image analysis of flotation froths. The texture spectrum and pixel tracing techniques developed at the Julius Kruttschnitt Mineral Research Centre are described in detail. The commercial implementation, JKFrothCam, is one of a number of froth image analysis systems now reaching maturity. In plants where it is installed, JKFrothCam has shown a number of performance benefits. Flotation runs more consistently, meeting product specifications while maintaining high recoveries. The system has also shown secondary benefits in that reagent costs have been significantly reduced as a result of improved flotation control. (C) 2002 Elsevier Science B.V. All rights reserved.
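
One way to sketch the texture-unit idea underlying texture spectrum methods (a simplified illustration, not the JKMRC implementation): each of a pixel's 8 neighbours is coded 0 (darker), 1 (equal) or 2 (brighter) relative to the centre, and the 8 codes form a base-3 texture unit number in [0, 6560]; the histogram of these units over an image is the texture spectrum.

```python
def texture_unit(patch):
    """Texture unit of the centre pixel of a 3x3 patch of grey levels."""
    centre = patch[1][1]
    # Clockwise neighbours starting at the top-left corner.
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    unit = 0
    for v in neighbours:
        code = 0 if v < centre else (1 if v == centre else 2)
        unit = unit * 3 + code   # accumulate as a base-3 number
    return unit

# Invented 3x3 grey-level patch.
patch = [[10, 20, 30],
         [20, 20, 20],
         [30, 20, 10]]
unit = texture_unit(patch)
```

A perfectly flat patch yields the all-equal unit, so differences in the distribution of units separate smooth froth surfaces from finely textured ones.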

Relevance: 60.00%

Abstract:

The principal topic of this work is the application of data mining techniques, in particular machine learning, to the discovery of knowledge in a protein database. In the first chapter a general background is presented. In section 1.1 we give an overview of the methodology of a data mining project and its main algorithms. In section 1.2 an introduction to proteins and their supporting file formats is outlined. The chapter concludes with section 1.3, which defines the main problem we intend to address in this work: determining whether an amino acid is exposed or buried in a protein, in a discrete way (i.e. not continuous), for five exposure levels: 2%, 10%, 20%, 25% and 30%. In the second chapter, following closely the CRISP-DM methodology, the whole process of constructing the database that supported this work is presented. In particular, the process of loading data from the Protein Data Bank, DSSP and SCOP is described. An initial data exploration is then performed, and a simple prediction model (baseline) of the relative solvent accessibility of an amino acid is introduced. The Data Mining Table Creator, a program developed to produce the data mining tables required for this problem, is also introduced. In the third chapter the results obtained are analysed with statistical significance tests. First, the classifiers used (Neural Networks, C5.0, CART and CHAID) are compared, and it is concluded that C5.0 is the most suitable for the problem at hand. The influence of parameters such as the amino acid information level, the amino acid window size and the SCOP class type on the accuracy of the predictive models is also compared. The fourth chapter starts with a brief review of the literature on amino acid relative solvent accessibility. We then give an overview of the main results achieved and finally discuss possible future work. The fifth and last chapter consists of appendices. Appendix A contains the schema of the database that supported this thesis. Appendix B contains a set of tables with additional information. Appendix C describes the software provided on the DVD accompanying this thesis, which allows the reconstruction of the present work.
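
A minimal sketch of the discrete exposure labelling described in the abstract, assuming relative solvent accessibility is given as a percentage (the function name and input value are illustrative):

```python
LEVELS = (2, 10, 20, 25, 30)   # the five exposure cut-offs, in per cent

def exposure_labels(rsa_percent):
    """Label an amino acid exposed/buried at each of the five cut-offs."""
    return {level: ("exposed" if rsa_percent > level else "buried")
            for level in LEVELS}

labels = exposure_labels(22.0)
```

Each cut-off turns the continuous relative solvent accessibility into a separate binary classification problem, which is what the classifiers in chapter three are trained on.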

Relevance: 60.00%

Abstract:

Given the rapid increase of species with a sequenced genome, the need to identify orthologous genes between them has emerged as a central bioinformatics task. Many different methods exist for orthology detection, which makes it difficult to decide which one to choose for a particular application. Here, we review the latest developments and issues in the orthology field, and summarize the most recent results reported at the third 'Quest for Orthologs' meeting. We focus on community efforts such as the adoption of reference proteomes, standard file formats and benchmarking. Progress in these areas is good, and they are already beneficial to both orthology consumers and providers. However, a major current issue is that the massive increase in complete proteomes poses computational challenges to many of the ortholog database providers, as most orthology inference algorithms scale at least quadratically with the number of proteomes. The Quest for Orthologs consortium is an open community with a number of working groups that join efforts to enhance various aspects of orthology analysis, such as defining standard formats and datasets, documenting community resources and benchmarking. AVAILABILITY AND IMPLEMENTATION: All such materials are available at http://questfororthologs.org. CONTACT: erik.sonnhammer@scilifelab.se or c.dessimoz@ucl.ac.uk.
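
The quadratic scaling mentioned above is easy to see: all-against-all orthology inference over n proteomes requires n(n-1)/2 pairwise comparisons, so a tenfold increase in proteomes means roughly a hundredfold increase in work. A trivial sketch:

```python
def pairwise_comparisons(n_proteomes):
    """Number of proteome pairs in an all-against-all comparison."""
    return n_proteomes * (n_proteomes - 1) // 2

# Growth for 10, 100 and 1000 complete proteomes.
growth = {n: pairwise_comparisons(n) for n in (10, 100, 1000)}
```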

Relevance: 60.00%

Abstract:

The HUPO Proteomics Standards Initiative has developed several standardized data formats to facilitate data sharing in mass spectrometry (MS)-based proteomics. These allow researchers to report their complete results in a unified way. However, at present, there is no format to describe the final qualitative and quantitative results for proteomics and metabolomics experiments in a simple tabular format. Many downstream analysis use cases are only concerned with the final results of an experiment and require an easily accessible format, compatible with tools such as Microsoft Excel or R. We developed the mzTab file format for MS-based proteomics and metabolomics results to meet this need. mzTab is intended as a lightweight supplement to the existing standard XML-based file formats (mzML, mzIdentML, mzQuantML), providing a comprehensive summary, similar in concept to the supplemental material of a scientific publication. mzTab files can contain protein, peptide, and small molecule identifications together with experimental metadata and basic quantitative information. The format is not intended to store the complete experimental evidence but provides mechanisms to report results at different levels of detail. These range from a simple summary of the final results to a representation of the results including the experimental design. This format is ideally suited to make MS-based proteomics and metabolomics results available to a wider biological community outside the field of MS. Several software tools for proteomics and metabolomics have already adopted the format as an output format. The comprehensive mzTab specification document and extensive additional documentation can be found online.
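
A rough sketch of how such a tabular result file can be read; the MTD/PRH/PRT line prefixes follow mzTab's convention of tagging each row with its section (metadata, protein header, protein row), but the columns and values below are a toy subset, not the actual specification.

```python
# Toy mzTab-style content with tab-separated fields.
sample = """MTD\tmzTab-version\t1.0.0
MTD\ttitle\tExample results
PRH\taccession\tdescription\tnum_psms
PRT\tP12345\tExample protein\t42
"""

def parse_mztab(text):
    """Collect metadata key/value pairs and protein rows keyed by header."""
    metadata, proteins, header = {}, [], None
    for line in text.splitlines():
        prefix, *fields = line.split("\t")
        if prefix == "MTD":
            metadata[fields[0]] = fields[1]
        elif prefix == "PRH":
            header = fields                      # column names for PRT rows
        elif prefix == "PRT":
            proteins.append(dict(zip(header, fields)))
    return metadata, proteins

meta, prots = parse_mztab(sample)
```

The line-prefix design is what makes the format friendly to Excel and R: every row is self-describing, so a consumer can keep only the sections it cares about.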

Relevance: 60.00%

Abstract:

Researchers working in the field of global connectivity analysis using diffusion magnetic resonance imaging (MRI) can count on a wide selection of software packages for processing their data, with methods ranging from the reconstruction of the local intra-voxel axonal structure to the estimation of the trajectories of the underlying fibre tracts. However, each package is generally task-specific and uses its own conventions and file formats. In this article we present the Connectome Mapper, a software pipeline aimed at helping researchers through the tedious process of organising, processing and analysing diffusion MRI data to perform global brain connectivity analyses. Our pipeline is written in Python and is freely available as open-source at www.cmtk.org.

Relevance: 60.00%

Abstract:

An integrated system of design for manufacturing and assembly (DFMA) and internet-based collaborative design is presented to support product design, manufacturing process planning and assembly planning for an axial eccentric oil pump. The system manages and schedules group-oriented collaborative activities. Design guidelines for internet-based collaborative design and DFMA are set out. The components and the manufacturing stages of the axial eccentric oil pump are described in detail. The file formats of the presented system cover the data types for collaborative design of the product, assembly design, assembly planning and assembly system design. Product design and assembly planning can be carried out synchronously and intelligently, and they are integrated under internet-based collaborative design and DFMA. Technologies for collaborative modelling, collaborative manufacturing and internet-based collaborative assembly for the specific pump construction are developed. A seven-level security scheme is presented to ensure the security of the internet-based collaborative design system.

Relevance: 60.00%

Abstract:

In my thesis I investigate how a company can integrate heterogeneous information in its information system, and what effect this integration has on the company's knowledge management. My work is based on problem solving. I first review the theory and concepts of integrating information in different formats, as well as the principles of enterprise networking, information flows, knowledge management and intellectual capital. The key issue in integration is conversion. I discuss at length the possibilities offered by SGML, since the goal is a document representation independent of hardware and software environments, together with the separation of structure from presentation. The structured approach has several advantages, such as organising the information within a document, enabling the reuse of information, and the longevity of documents. Structural markup makes it possible to convert a document into several different presentation formats. In the empirical part, using an application I built for Televirmi Oy, I investigate and solve how files in different formats are combined into a relational database and taken into use in the information system. After the integration, I analyse its effects on the company's knowledge management at a general level, as well as the opportunities for exploiting the information.
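
The conversion step described above can be sketched in miniature: rows from two differently formatted sources are loaded into one relational table. The file contents and schema below are invented for illustration; the thesis itself targets SGML documents and a production database.

```python
import csv, io, sqlite3

# Two hypothetical sources in different formats.
csv_source = "name,phone\nAlice,555-0101\n"          # comma-separated text
kv_source = "name=Bob\nphone=555-0102\n"             # key=value lines

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (name TEXT, phone TEXT)")

# Load the CSV source.
for row in csv.DictReader(io.StringIO(csv_source)):
    conn.execute("INSERT INTO contacts VALUES (?, ?)",
                 (row["name"], row["phone"]))

# Load the key=value source.
record = dict(line.split("=", 1) for line in kv_source.splitlines())
conn.execute("INSERT INTO contacts VALUES (?, ?)",
             (record["name"], record["phone"]))

contacts = conn.execute(
    "SELECT name, phone FROM contacts ORDER BY name").fetchall()
```

Once both sources share one relational schema, the information system can query them uniformly, which is the precondition for the knowledge-management benefits discussed in the thesis.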

Relevance: 60.00%

Abstract:

gvSIG Mobile, the version of gvSIG for mobile devices, presents its new release, which includes the long-awaited features for creating new geographic entities and using custom forms for data editing, in addition to new vector data formats (GML, KML, GPX) and reference systems. These features join its existing capabilities as a map viewer (ECW, SHP, WMS) and GPS-based positioning system. gvSIG Mobile is being developed by Prodevelop, the Universitat de València and Iver for the Conselleria d'Infraestructures i Transport of the Generalitat Valenciana, and is distributed under a GPL licence.
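
As a small illustration of one of the vector formats mentioned (GPX), track points can be read with nothing but the standard library; the document and coordinates below are invented.

```python
import xml.etree.ElementTree as ET

# A minimal invented GPX track (GPX 1.1 stores coordinates as
# lat/lon attributes on trkpt elements).
gpx = """<?xml version="1.0"?>
<gpx xmlns="http://www.topografix.com/GPX/1/1" version="1.1" creator="demo">
  <trk><trkseg>
    <trkpt lat="39.47" lon="-0.38"/>
    <trkpt lat="39.48" lon="-0.37"/>
  </trkseg></trk>
</gpx>"""

ns = {"gpx": "http://www.topografix.com/GPX/1/1"}
root = ET.fromstring(gpx)
points = [(float(p.get("lat")), float(p.get("lon")))
          for p in root.findall(".//gpx:trkpt", ns)]
```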