997 results for "file format description"
Abstract:
Objective: The aim of this study was to evaluate the performance of observers in diagnosing proximal caries in digital images obtained from bitewing radiographs digitized using two scanners and four digital cameras, saved as Joint Photographic Experts Group (JPEG) and tagged image file format (TIFF) files, and to compare them with the original conventional radiographs. Method: In total, 56 extracted teeth were radiographed with Kodak Insight film (Eastman Kodak, Rochester, NY) in a Kaycor Yoshida X-ray device (Kaycor X-707; Yoshida Dental Manufacturing Co., Tokyo, Japan) operating at 70 kV and 7 mA with an exposure time of 0.40 s. The radiographs were digitized with CanonScan D646U (Canon USA Inc., Newport News, VA) and Genius ColorPage HR7X (KYE Systems Corp. America, Doral, FL) scanners, and with Canon Powershot G2 (Canon USA Inc.), Canon RebelXT (Canon USA Inc.), Nikon Coolpix 8700 (Nikon Inc., Melville, NY) and Nikon D70s (Nikon Inc.) digital cameras, in JPEG and TIFF formats. Three observers evaluated the images. The teeth were then examined under a polarized-light microscope to verify the presence and depth of carious lesions. Results: The probability of no diagnosis ranged from 1.34% (Insight film) to 52.83% (CanonScan/JPEG). Sensitivity ranged from 0.24 (Canon RebelXT/JPEG) to 0.53 (Insight film), specificity ranged from 0.93 (Nikon Coolpix/JPEG, Canon Powershot/TIFF, Canon RebelXT/JPEG and TIFF) to 0.97 (CanonScan/TIFF and JPEG), and accuracy ranged from 0.82 (Canon RebelXT/JPEG) to 0.91 (CanonScan/JPEG). Conclusion: Carious lesion diagnosis did not change with the file format (JPEG or TIFF) in which the images were saved, for any of the equipment used. Only the CanonScan scanner did not perform adequately in radiograph digitization for caries diagnosis, and it is not recommended for this purpose. Dentomaxillofacial Radiology (2011) 40, 338-343. doi: 10.1259/dmfr/67185962
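The diagnostic figures reported above follow directly from a 2x2 contingency table of observer calls against the histological gold standard. A minimal Python sketch of the arithmetic (the counts are hypothetical, not the study's data):

    # Diagnostic performance from a 2x2 contingency table:
    # tp/fn = carious surfaces detected/missed, tn/fp = sound surfaces cleared/flagged.
    def diagnostic_metrics(tp, fp, tn, fn):
        sensitivity = tp / (tp + fn)                # true-positive rate
        specificity = tn / (tn + fp)                # true-negative rate
        accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall agreement
        return sensitivity, specificity, accuracy

    # Hypothetical counts for one observer/modality.
    print(diagnostic_metrics(tp=15, fp=3, tn=40, fn=12))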
Abstract:
Roughly fifteen years ago, the Church of Jesus Christ of Latter-day Saints published a proposed standard file format called GEDCOM, designed to allow different genealogy programs to exchange data. Five years later, in May 2000, the GENTECH Data Modeling Project appeared, with the support of the Federation of Genealogical Societies (FGS) and other American genealogical societies. It attempted to define a genealogical logical data model to facilitate data exchange between different genealogy programs. Although genealogists deal with an enormous variety of data sources, one of the central concepts of this data model was that all genealogical data could be broken down into a series of short, formal genealogical statements. This was more versatile than merely exporting and importing data records with predefined fields. The project was finally absorbed in 2004 by the National Genealogical Society (NGS). Despite being a genealogical reference for many applications, these models have serious drawbacks in adapting to different cultural and social environments. At present there is no formal proposal for a recognized standard to represent the family domain. Here we propose an alternative conceptual model, largely inherited from the aforementioned models. The design is intended to overcome their limitations. However, its major innovation lies in applying the ontological paradigm when modeling statements and entities.
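As an illustration of the statement-centred idea, a minimal sketch of a formal genealogical statement as a data structure; every field name here is an assumption for illustration, not part of the GENTECH model or of the proposal:

    from dataclasses import dataclass

    @dataclass
    class GenealogicalStatement:
        """One short, formal assertion extracted from a source."""
        subject: str       # e.g. a person identifier
        predicate: str     # e.g. "born-in", "married-to"
        value: str         # e.g. a place or another person identifier
        source: str        # citation of the supporting evidence
        confidence: float  # the researcher's belief in the assertion

    stmt = GenealogicalStatement("person/123", "born-in", "place/lisbon",
                                 "parish register, 1852", 0.9)
    print(stmt)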
Abstract:
The GO annotation dataset provided by the UniProt Consortium (GOA: http://www.ebi.ac.uk/GOA) is a comprehensive set of evidence-based associations between terms from the Gene Ontology resource and UniProtKB proteins. Currently supplying over 100 million annotations to 11 million proteins in more than 360,000 taxa, this resource has doubled in size over the last 2 years and has benefited from a wealth of checks to improve annotation correctness and consistency, as well as from the greater information content enabled by GO Consortium annotation format developments. Detailed, manual GO annotations obtained from the curation of peer-reviewed papers are directly contributed by all UniProt curators and supplemented with manual and electronic annotations from 36 model-organism and domain-focused scientific resources. The inclusion of high-quality, automatic annotation predictions ensures the UniProt GO annotation dataset supplies functional information to a wide range of proteins, including those from poorly characterized, non-model organism species. UniProt GO annotations are freely available in a range of formats accessible by both file downloads and web-based views. In addition, the introduction of a new, normalized file format in 2010 has made for easier handling of the complete UniProt-GOA dataset.
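UniProt-GOA annotations are distributed in, among other formats, the GO Consortium's tab-separated GAF format. A minimal reading sketch with the Python standard library (the column positions are those of the GAF 2.0 specification; verify them against the file's header before relying on them):

    import csv

    def read_gaf(path):
        """Yield (protein accession, GO term, evidence code) from a GAF file."""
        with open(path) as fh:
            for row in csv.reader(fh, delimiter="\t"):
                if not row or row[0].startswith("!"):  # skip comment/header lines
                    continue
                # GAF 2.0: column 2 = DB object ID, 5 = GO ID, 7 = evidence code.
                yield row[1], row[4], row[6]

    # for accession, go_id, evidence in read_gaf("goa_uniprot_all.gaf"):
    #     ...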
Abstract:
Advanced neuroinformatics tools are required for methods of connectome mapping, analysis, and visualization. The inherent multi-modality of connectome datasets poses new challenges for data organization, integration, and sharing. We have designed and implemented the Connectome Viewer Toolkit - a set of free and extensible open source neuroimaging tools written in Python. The key components of the toolkit are as follows: (1) The Connectome File Format is an XML-based container format to standardize multi-modal data integration and structured metadata annotation. (2) The Connectome File Format Library enables management and sharing of connectome files. (3) The Connectome Viewer is an integrated research and development environment for visualization and analysis of multi-modal connectome data. The Connectome Viewer's plugin architecture supports extensions with network analysis packages and an interactive scripting shell, to enable easy development and community contributions. Integration with tools from the scientific Python community allows the leveraging of numerous existing libraries for powerful connectome data mining, exploration, and comparison. We demonstrate the applicability of the Connectome Viewer Toolkit using Diffusion MRI datasets processed by the Connectome Mapper. The Connectome Viewer Toolkit is available from http://www.cmtk.org/
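A rough sketch of how an XML-based container of this kind might be inspected with the Python standard library; the ZIP layout and the metadata file name "meta.cml" are assumptions for illustration, and the toolkit's own Connectome File Format Library is the supported interface:

    import zipfile
    import xml.etree.ElementTree as ET

    def list_container_contents(path, metadata_name="meta.cml"):
        """Print the XML metadata entries of a container-style connectome file."""
        with zipfile.ZipFile(path) as archive:
            root = ET.fromstring(archive.read(metadata_name))
            for element in root.iter():
                print(element.tag, element.attrib)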
Abstract:
Introduction: The field of connectomic research is growing rapidly as a result of methodological advances in structural neuroimaging on many spatial scales. In particular, progress in diffusion MRI data acquisition and processing has made macroscopic structural connectivity maps available in vivo through connectome mapping pipelines (Hagmann et al, 2008) into so-called connectomes (Hagmann 2005, Sporns et al, 2005). They exhibit both spatial and topological information that constrain functional imaging studies and are relevant to their interpretation. The need has grown for a special-purpose software tool that supports both clinical researchers and neuroscientists in investigating such connectome data. Methods: We developed the ConnectomeViewer, a powerful, extensible software tool for visualization and analysis in connectomic research. It uses the newly defined container-like Connectome File Format, specifying networks (GraphML), surfaces (Gifti), volumes (Nifti), track data (TrackVis) and metadata. Using Python as the programming language allows it to be cross-platform and gives it access to a multitude of scientific libraries. Results: Thanks to a flexible plugin architecture, functionality can easily be enhanced for specific purposes. The following features are already implemented:
* Ready use of libraries, e.g. for complex network analysis (NetworkX) and data plotting (Matplotlib). More brain connectivity measures will be implemented in a future release (Rubinov et al, 2009).
* 3D view of networks with node positioning based on the corresponding ROI surface patch; other layouts are possible.
* Picking functionality to select nodes and edges, retrieve more node information (ConnectomeWiki) and toggle surface representations.
* Interactive thresholding and modality selection of edge properties using filters.
* Storage of arbitrary metadata for networks, allowing e.g. group-based analysis or meta-analysis.
* A Python shell for scripting; application data is exposed and can be modified or used for further post-processing.
* Visualization pipelines composed of filters and modules using Mayavi (Ramachandran et al, 2008).
* An interface to TrackVis to visualize track data; selected nodes are converted to ROIs for fiber filtering.
The Connectome Mapping Pipeline (Hagmann et al, 2008) processed 20 healthy subjects into an average connectome dataset. The figures show the ConnectomeViewer user interface using this dataset; connections are shown that occur in all 20 subjects. The dataset is freely available from the homepage (connectomeviewer.org). Conclusions: The ConnectomeViewer is a cross-platform, open-source software tool that provides extensive visualization and analysis capabilities for connectomic research. It has a modular architecture, integrates relevant data types and is completely scriptable. Visit www.connectomics.org to get involved as a user or developer.
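Since the networks are stored as GraphML and the tool builds on NetworkX, a basic scripted analysis can be sketched as follows (the file name is a placeholder and the measures are merely examples of what NetworkX provides):

    import networkx as nx

    # Load a connectome network stored as GraphML (placeholder file name).
    graph = nx.read_graphml("average_connectome.graphml")

    print("nodes:", graph.number_of_nodes())
    print("edges:", graph.number_of_edges())
    print("density:", nx.density(graph))
    # Clustering is defined on simple undirected graphs, hence the conversion.
    print("clustering:", nx.average_clustering(nx.Graph(graph)))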
Abstract:
The purpose of this work was to determine measurement-system limit values for an optical camera-based dirt-count system and to test the system's performance in practice. The goal was to productize the camera-based dirt-count analysis as a service product that could be used in screen condition surveys and as a problem-solving tool. The theoretical part consisted of two topics: pulp contaminants, dirt-count theory and contaminant measurement methods; and marketing, productization and launch processes from the perspective of a service product. The experimental part examined factors affecting camera-based dirt-count analysis: camera focus, image sharpness, the colour, grammage and dirt content of the analysed sheet, impregnation, light source, image editing, file format and pixel count. The practical applicability of the camera-based dirt-count analysis was tested with a mill case study. It was found that camera-based dirt-count analysis could be used for almost all pulp types. The work defined a calibration method for focusing the camera on the sheet plane, and a shutter-speed analysis for determining the pulp-type-dependent shutter speed. For the camera-based dirt-count analysis, a sheet grammage of 60 g/m2, an aperture of F5 and a sharpness setting of 5 were specified. The results showed that the analysed sheets do not need to be impregnated or post-processed. No correlation with Somerville removal efficiency was found. For the example mill, the dirt contents and removal efficiencies of the primary stage were determined. Based on the mill case results, the oxygen stage and the D0 stage were found to be the most effective at removing contaminants.
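The core image-analysis step of a camera-based dirt count can be sketched in Python as thresholding a greyscale sheet image and counting dark connected specks. This is a heavily simplified illustration under assumed parameters, not the measurement system studied in the thesis:

    import numpy as np
    from PIL import Image
    from scipy import ndimage

    def dirt_count(path, threshold=100):
        """Count dark specks in a greyscale image of a laboratory sheet."""
        grey = np.asarray(Image.open(path).convert("L"))
        specks = grey < threshold            # dark pixels = candidate dirt
        _, n_specks = ndimage.label(specks)  # group pixels into connected specks
        return n_specks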
Abstract:
The purpose of this work is to design and implement a web-based price estimation system for power plant solutions for Savonia Power Oy. The system automates the calculation of key figures for power plant solutions from initial values supplied by the customer and stores any contact request. The system requirements are easy updating of the calculation formulas, automatic retrieval of the formulas from an Excel 2007 format file, and a web-based customer interface. The work is divided into two parts. The theoretical part reviews the background of the technologies used and describes the structure of Microsoft's OOXML file format to the extent required by the work. The practical part designs, and partly implements, the complete system using the PHP language, XML and a MySQL database. The greatest challenges in implementing the system are parsing the calculation formulas from the Excel file without strictly constraining its content to a fixed template, and updating the system easily with the retrieved formulas. The end result of the work is a working, though unpolished, system together with this document. The main contribution of the work is the resolution of the design challenges mentioned above, together with a ready program skeleton for a system taken into general use.
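The formula-extraction idea can be sketched briefly in Python rather than PHP: an Excel 2007 (.xlsx) workbook is a ZIP archive of XML parts, and cell formulas sit in f elements of the worksheet XML. The sketch below ignores shared formulas and assumes the default worksheet path:

    import zipfile
    import xml.etree.ElementTree as ET

    NS = {"m": "http://schemas.openxmlformats.org/spreadsheetml/2006/main"}

    def extract_formulas(xlsx_path, sheet="xl/worksheets/sheet1.xml"):
        """Yield (cell reference, formula text) pairs from one worksheet."""
        with zipfile.ZipFile(xlsx_path) as archive:
            root = ET.fromstring(archive.read(sheet))
            for cell in root.iter("{%s}c" % NS["m"]):  # c = cell element
                formula = cell.find("m:f", NS)         # f = formula element
                if formula is not None and formula.text:
                    yield cell.get("r"), formula.text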
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Sometimes you may need to scan in photographs, books or magazines. Scanning is the easy part; making sure your settings are right is the important part. Scan at 300 dpi at the size you need to print. Only have an A4 scanner but need an A3 print? No problem: scan at 600 dpi. Enlarging A4 to A3 scales each dimension by a factor of √2, so a 600 dpi scan still delivers roughly 424 dpi at A3 size, comfortably above the 300 dpi target; the quick calculation is sketched below. Always scan as a TIFF file, as this will give you an uncompressed source to work from.
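The arithmetic behind that rule of thumb (ISO paper sizes scale by √2 per size step):

    import math

    def effective_dpi(scan_dpi, enlargement_steps=1):
        """Print resolution left after enlarging by n ISO paper steps (A4 to A3 = 1)."""
        return scan_dpi / math.sqrt(2) ** enlargement_steps

    print(effective_dpi(600))  # ~424 dpi at A3 from an A4 original, still above 300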
Abstract:
This PowerPoint outlines the main points that you need to consider when adding figures to your thesis, including resolution, file format and copyright.
Abstract:
Train dispatchers face many challenges from conflicts that delay trains while the dispatching problems of the network are being resolved. The major challenge for train dispatchers is to make the right decision, using approaches that are reliable, cost-effective and much faster at solving dispatching problems. This thesis provides detailed information on the implementation of different heuristic algorithms for train dispatchers solving train dispatching problems. The library data files used are in XML file format and cover both single and double tracks between main stations. The main objective of this work is to build heuristic algorithms that resolve the unexpected delays train dispatchers face and help them make the right decisions, leading to reliable and cost-effective solutions. The proposed heuristic algorithms were able to help dispatchers make the right decisions when solving train dispatching problems; a simplified illustration of such a rule follows.
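As an illustration only (the thesis does not publish its code), a greedy rule of the kind such heuristics rely on: when trains contend for the same single-track section, let the higher-priority, less-delayed train through first. All names and fields below are assumptions:

    def resolve_conflict(trains):
        """Order contending trains by (priority, current delay), lowest first.

        `trains` is a list of dicts like {"id": "T1", "priority": 1, "delay": 4},
        where a lower priority number marks a more important train.
        """
        return sorted(trains, key=lambda t: (t["priority"], t["delay"]))

    order = resolve_conflict([{"id": "T1", "priority": 2, "delay": 0},
                              {"id": "T2", "priority": 1, "delay": 5}])
    print([t["id"] for t in order])  # T2 clears the section first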
Abstract:
Aims: This study compared fractal dimension (FD) values of mandibular trabecular bone in digital and digitized images at different spatial and contrast resolutions. Materials and Methods: 12 radiographs of dried human mandibles were obtained using custom-fabricated hybrid image receptors composed of a periapical radiographic film and a photostimulable phosphor plate (PSP). The film/PSP sets were disassembled, and the PSPs produced images with 600 dots per inch (dpi) and 16 bits. These images were exported as tagged image file format (TIFF), 16 and 8 bits, and 600, 300 and 150 dpi. The films were processed and digitized 3 times on a flatbed scanner, producing TIFF images with 600, 300 and 150 dpi, and 8 bits. On each image, a circular region of interest was selected on the trabecular alveolar bone, away from root apices, and FD was calculated by the tile counting method. Two-way ANOVA and Tukey's test were conducted to compare the mean values of FD according to image type and spatial resolution (α = 5%). Results: Spatial resolution was directly and inversely proportional to FD mean values and standard deviation, respectively. The spatial resolution of 150 dpi yielded significantly lower mean values of FD than the resolutions of 600 and 300 dpi (P < 0.05). Nonsignificant variability was observed for the image types (P > 0.05). The interaction between type of image and level of spatial resolution was not significant (P > 0.05). Conclusion: Under the tested conditions, FD values of the mandibular trabecular bone did not change whether assessed in digital or digitized images. Furthermore, these values were influenced by lower spatial resolution but not by contrast resolution.
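The tile counting method mentioned above can be sketched compactly: cover the binarized region of interest with tiles of decreasing size, count the occupied tiles, and take FD as the slope of log(count) against log(1/size). The binarization and the tile sizes below are illustrative assumptions, not the study's exact protocol:

    import numpy as np

    def tile_count_fd(binary, sizes=(2, 4, 8, 16, 32)):
        """Estimate the fractal dimension of a 2-D boolean array by tile counting."""
        counts = []
        h, w = binary.shape
        for s in sizes:
            # Split the (cropped) image into s-by-s tiles and count occupied ones.
            tiles = binary[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
            counts.append(np.count_nonzero(tiles.any(axis=(1, 3))))
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
        return slope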
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
An Integrated Transmission-Media Noise Calibration Software For Deep-Space Radio Science Experiments
Abstract:
The thesis describes the implementation of calibration, format-translation and data-conditioning software for radiometric tracking data of deep-space spacecraft. All of the propagation-media noise rejection techniques available as features in the code are covered in their mathematical formulations, performance and software implementations. Some techniques are retrieved from the literature and the current state of the art, while other algorithms have been conceived ex novo. All three typical deep-space refractive environments (solar plasma, ionosphere, troposphere) are dealt with by specific subroutines. Particular attention has been given to the GNSS-based tropospheric path delay calibration subroutine, since it is the bulkiest module of the software suite in terms of both sheer number of lines of code and development time. The software is currently in its final stage of development and, once completed, will serve as a pre-processing stage for orbit determination codes. Calibration of transmission-media noise sources in radiometric observables proved to be an essential operation to perform on radiometric data in order to meet the increasingly demanding error budget requirements of modern deep-space missions. A completely autonomous, all-around propagation-media calibration software suite is a novelty in orbit determination, although standalone codes are currently employed by ESA and NASA. The described software is planned to be compatible with the current standards for tropospheric noise calibration used by both agencies, such as the AMC and TSAC products and ESA IFMS weather data, and it natively works with the Tracking Data Message (TDM) file format adopted by CCSDS as a standard aimed at promoting and simplifying inter-agency collaboration.
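For orientation, a rough sketch of pulling the observable lines out of a plain-text, keyword = value TDM segment; this covers only the simplest case of the CCSDS format and is an illustration, not part of the software described:

    def read_tdm_observables(path):
        """Yield (keyword, timestamp, value) from the DATA sections of a TDM file."""
        in_data = False
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if line == "DATA_START":
                    in_data = True
                elif line == "DATA_STOP":
                    in_data = False
                elif in_data and "=" in line:
                    # e.g. "RANGE = 2011-03-15T12:00:00 12345.678"
                    keyword, rest = (p.strip() for p in line.split("=", 1))
                    timestamp, value = rest.split()
                    yield keyword, timestamp, float(value)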
Abstract:
This paper describes the RNetCDF package (version 1.6), an interface for reading and writing files in the Unidata NetCDF format, and gives an introduction to the NetCDF file format. NetCDF is a machine-independent binary file format that allows the storage of different types of array-based data, along with short metadata descriptions. The package presented here provides access to the most important functions of the NetCDF C interface for reading, writing, and modifying NetCDF datasets. In this paper, we present a short overview of the NetCDF file format and show usage examples of the package.
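The data model the package exposes can be illustrated with the Python netCDF4 package; this sketches the NetCDF concepts (dimensions, variables, attributes) rather than the RNetCDF function names:

    from netCDF4 import Dataset

    # Create a small NetCDF dataset: one dimension, one variable, one attribute.
    with Dataset("example.nc", "w") as nc:
        nc.createDimension("time", 3)
        temp = nc.createVariable("temperature", "f4", ("time",))
        temp.units = "degC"              # a short metadata description
        temp[:] = [19.5, 20.1, 20.7]

    # Read it back.
    with Dataset("example.nc") as nc:
        print(nc.variables["temperature"][:], nc.variables["temperature"].units)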