54 results for Automatic annotation


Relevance: 20.00%

Abstract:

The paper deals with the development and application of a generic methodology for the automatic processing (mapping and classification) of environmental data. The General Regression Neural Network (GRNN) is considered in detail and is proposed as an efficient tool for spatial data mapping (regression). The Probabilistic Neural Network (PNN) is considered as an automatic tool for spatial classification. The automatic tuning of isotropic and anisotropic GRNN/PNN models using a cross-validation procedure is presented. Results are compared with the k-Nearest-Neighbours (k-NN) interpolation algorithm using an independent validation data set. Real case studies are based on decision-oriented mapping and classification of radioactively contaminated territories.
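As a loose illustration of the regression side of this approach, the sketch below implements an isotropic GRNN (equivalent to Nadaraya-Watson kernel regression) with leave-one-out cross-validation for bandwidth tuning. The Gaussian kernel, function names and candidate-bandwidth grid are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """Isotropic GRNN: Gaussian-kernel weighted average of training targets."""
    # Squared Euclidean distances, shape (n_query, n_train).
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

def tune_sigma(X, y, candidate_sigmas):
    """Pick the kernel bandwidth by leave-one-out cross-validation."""
    n = len(X)
    best_sigma, best_mse = None, np.inf
    for sigma in candidate_sigmas:
        se = 0.0
        for i in range(n):
            mask = np.arange(n) != i
            pred = grnn_predict(X[mask], y[mask], X[i:i + 1], sigma)
            se += (pred[0] - y[i]) ** 2
        if se / n < best_mse:
            best_sigma, best_mse = sigma, se / n
    return best_sigma
```

The PNN used for classification is the direct analogue: per-class kernel density estimates with the same bandwidth-tuning loop, the predicted class being the one with the highest density at the query point.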

Relevance: 20.00%

Abstract:

OBJECTIVE: Before a patient can be connected to a mechanical ventilator, the controls of the apparatus need to be set appropriately. Today, this is done by the intensive care professional. With the advent of closed-loop controlled mechanical ventilation, methods will be needed to select appropriate start-up settings automatically. The objective of our study was to test such a computerized method, which could eventually be used as a start-up procedure (first 5-10 minutes of ventilation) for closed-loop controlled ventilation. DESIGN: Prospective study. SETTING: ICUs in two adult and one children's hospital. PATIENTS: 25 critically ill adult patients (age ≥ 15 y) and 17 critically ill children selected at random were studied. INTERVENTIONS: To simulate 'initial connection', the patients were disconnected from their ventilator and transiently connected to a modified Hamilton AMADEUS ventilator for a maximum of one minute. During that time they were ventilated with a fixed and standardized breath pattern (Test Breaths) based on pressure-controlled synchronized intermittent mandatory ventilation (PCSIMV). MEASUREMENTS AND MAIN RESULTS: Airway flow, airway pressure and instantaneous CO2 concentration (using a mainstream CO2 analyzer) were measured at the mouth during application of the Test Breaths. The Test Breaths were analyzed in terms of tidal volume, expiratory time constant and series dead space. From these data an initial ventilation pattern, consisting of respiratory frequency and tidal volume, was calculated. This ventilation pattern was compared to the one measured prior to the onset of the study using a two-tailed paired t-test, and additionally to a conventional method for setting up ventilators. The computer-proposed ventilation pattern did not differ significantly from the actual pattern (p > 0.05), while the conventional method did. However, the scatter was large, and in 6 cases deviations in minute ventilation of more than 50% were observed. CONCLUSIONS: The analysis of standardized Test Breaths allows automatic determination of an initial ventilation pattern for intubated ICU patients. While this pattern does not appear superior to the one chosen by the conventional method, it is derived fully automatically and without the need for manual patient data entry such as weight or height. This makes the method potentially useful as a start-up procedure for closed-loop controlled ventilation.
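The abstract does not give formulas, but the measured quantities it names can be sketched from the airway signals. Below is a minimal, hypothetical illustration of extracting tidal volume (by integrating flow) and the expiratory time constant (from the slope of volume against flow during passive expiration); the units, sign convention and single-compartment assumption are mine, not the paper's.

```python
import numpy as np

def analyze_test_breath(t, flow):
    """Extract breath parameters from one Test Breath.
    t: time [s]; flow: airway flow [L/s], positive during inspiration."""
    # Volume above the end-expiratory level, by trapezoidal integration.
    vol = np.concatenate(
        ([0.0], np.cumsum(np.diff(t) * (flow[1:] + flow[:-1]) / 2.0)))
    tidal_volume = vol.max()
    # For a passive single-compartment lung, V(t) = Vt * exp(-t/tau)
    # during expiration, so volume is linear in flow with slope -tau.
    exp_phase = flow < 0
    slope, _ = np.polyfit(flow[exp_phase], vol[exp_phase], 1)
    exp_time_constant = -slope
    return tidal_volume, exp_time_constant
```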

Relevance: 20.00%

Abstract:

This paper presents a new non-parametric atlas registration framework, derived from the optical flow model and active contour theory, applied to automatic subthalamic nucleus (STN) targeting in deep brain stimulation (DBS) surgery. In previous work, we demonstrated that the STN position can be predicted from the position of surrounding visible structures, namely the lateral and third ventricles. An STN targeting process can thus be obtained by registering these structures of interest between a brain atlas and the patient image. Here we aim to improve on the results of state-of-the-art targeting methods while reducing the computational time. Our simultaneous segmentation and registration model shows mean STN localization errors statistically similar to those of the best-performing registration algorithms tested so far, and to the targeting experts' variability. Moreover, the computational time of our registration method is much lower, which is a worthwhile improvement from a clinical point of view.
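The authors' coupled segmentation-registration model is not spelled out in the abstract. As a generic illustration of the optical-flow family of non-parametric registration it builds on, here is a minimal 2D Thirion demons loop; this is explicitly not the paper's algorithm, and the smoothing width and iteration count are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def warp(img, u):
    """Resample img under a dense displacement field u of shape (2, H, W)."""
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    return map_coordinates(img, [yy + u[0], xx + u[1]], order=1, mode='nearest')

def demons_register(fixed, moving, iters=100, sigma=2.0):
    """Estimate a displacement field aligning `moving` to `fixed`."""
    u = np.zeros((2,) + fixed.shape)
    gy, gx = np.gradient(fixed)  # image gradients of the fixed image
    for _ in range(iters):
        diff = warp(moving, u) - fixed
        denom = gx ** 2 + gy ** 2 + diff ** 2 + 1e-9
        # Thirion's demons force, then Gaussian regularization of the field.
        u[0] = gaussian_filter(u[0] - diff * gy / denom, sigma)
        u[1] = gaussian_filter(u[1] - diff * gx / denom, sigma)
    return u
```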

Relevance: 20.00%

Abstract:

Arising from either retrotransposition or genomic duplication of functional genes, pseudogenes are "genomic fossils" valuable for exploring the dynamics and evolution of genes and genomes. Pseudogene identification is an important problem in computational genomics, and is also critical for obtaining an accurate picture of a genome's structure and function. However, no consensus computational scheme for defining and detecting pseudogenes has been developed thus far. As part of the ENCyclopedia Of DNA Elements (ENCODE) project, we compared several distinct pseudogene annotation strategies and found that different approaches and parameters often resulted in rather distinct sets of pseudogenes. We subsequently developed a consensus approach for annotating pseudogenes (derived from protein-coding genes) in the ENCODE regions, resulting in 201 pseudogenes, two-thirds of which originated from retrotransposition. A survey of orthologs for these pseudogenes in 28 vertebrate genomes showed that a significant fraction (approximately 80%) of the processed pseudogenes are primate-specific sequences, highlighting the increased retrotransposition activity in primates. Analysis of sequence conservation and variation also demonstrated that most pseudogenes evolve neutrally, and that processed pseudogenes appear to have lost their coding potential immediately or soon after their emergence. To explore the functional implications of pseudogene prevalence, we extensively examined the transcriptional activity of the ENCODE pseudogenes, performing a systematic series of pseudogene-specific RACE analyses. These, together with complementary evidence from tiling microarrays and high-throughput sequencing, demonstrated that at least a fifth of the 201 pseudogenes are transcribed in one or more cell lines or tissues.

Relevance: 20.00%

Abstract:

We propose to evaluate automatic three-dimensional gray-value rigid registration (RR) methods for prostate localization on cone-beam computed tomography (CBCT) scans. In total, 103 CBCT scans of 9 prostate patients were analyzed. Each was registered to the planning CT scan using different methods: (a) global RR, (b) pelvic bone structure RR, and (c) bone RR refined by local soft-tissue RR using the CT clinical target volume (CTV) expanded with a 1-, 3-, 5-, 8-, 10-, 12-, 15- or 20-mm margin. To evaluate the results, a radiation oncologist manually delineated the CTV on the CBCT scans. The Dice coefficients between each automatic CBCT segmentation (derived from the transformation of the manual CT segmentation) and the manual CBCT segmentation were calculated. Global and bone CT/CBCT RR were shown to yield insufficient results on average. Local RR with an 8-mm margin around the CTV after bone RR was found to be the best candidate, systematically and significantly improving prostate localization.
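The evaluation metric itself is straightforward; for reference, a minimal sketch of the Dice coefficient between two binary masks (the helper name is mine):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary segmentations:
    2 * |A intersect B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```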

Relevance: 20.00%

Abstract:

BACKGROUND: The annotation of protein post-translational modifications (PTMs) is an important task of UniProtKB curators and, with continuing improvements in experimental methodology, an ever greater number of articles are being published on this topic. To help curators cope with this growing body of information, we have developed a system which extracts information from the scientific literature for the most frequently annotated PTMs in UniProtKB. RESULTS: The procedure uses a pattern-matching and rule-based approach to extract sentences with information on the type and site of modification. A ranked list of candidate proteins for the modification is also provided. For PTM extraction, precision varies from 57% to 94%, and recall from 75% to 95%, depending on the type of modification. The procedure was used to track new publications on PTMs and to recover potential supporting evidence for phosphorylation sites annotated on the basis of large-scale proteomics experiments. CONCLUSIONS: The information retrieval and extraction method developed in this study forms the basis of a simple tool for the manual curation of protein post-translational modifications in UniProtKB/Swiss-Prot. Our work demonstrates that even simple text-mining tools can be effectively adapted for database curation tasks, provided that a thorough understanding of the curation workflow and its requirements is first obtained. The system can be accessed at http://eagl.unige.ch/PTM/.
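The paper's rule set is not reproduced in the abstract; the toy sketch below only illustrates the general pattern-matching idea (flagging sentences that name both a modification type and a residue position). The patterns and helper are hypothetical and deliberately small.

```python
import re

# Hypothetical pattern: a PTM keyword followed, within a short span,
# by a residue name and position. Production curation rules are far
# more elaborate than this.
PTM_SITE = re.compile(
    r'\b(phosphorylat\w*|acetylat\w*|methylat\w*|ubiquitinat\w*)\b'
    r'.{0,80}?\b(Ser|Thr|Tyr|Lys|Arg)[-\s]?(\d{1,4})\b',
    re.IGNORECASE | re.DOTALL)

def extract_ptm_sentences(text):
    """Yield (sentence, modification, residue, position) candidates."""
    for sentence in re.split(r'(?<=[.!?])\s+', text):
        match = PTM_SITE.search(sentence)
        if match:
            yield (sentence,) + match.groups()

# Example:
# list(extract_ptm_sentences("AKT1 is phosphorylated at Ser-473 by mTORC2."))
```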

Relevance: 20.00%

Abstract:

The purpose of this study was to assess the spatial resolution of a computed tomography (CT) scanner, using an automatic approach developed for routine quality control, while varying CT parameters. The methods available for assessing the modulation transfer function (MTF) with the automatic approach were Droege's method and the bead point source (BPS) method. These MTFs were compared with presampled MTFs obtained using Boone's method. The results show that Droege's method is not accurate in the low-frequency range, whereas the BPS method is highly sensitive to image noise. While both methods are well suited to routine stability controls, neither is able to provide absolute measurements. Boone's method, on the other hand, is robust with respect to aliasing, more resilient to noise, and provides absolute measurements, thus satisfying the commissioning requirements perfectly. Boone's method combined with a modified Catphan 600 phantom could therefore be a good solution for assessing CT spatial resolution in the different CT planes.
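As a schematic of the bead point-source idea (not the paper's exact implementation), the MTF is the normalized magnitude spectrum of the point-spread function. The crude background handling below also shows why the BPS method is noise-sensitive: any residual noise propagates directly into the spectrum.

```python
import numpy as np

def mtf_from_psf(psf_profile, pixel_spacing_mm):
    """1D MTF estimate from a bead point-spread profile.
    Returns (spatial frequencies [cycles/mm], MTF normalized at zero)."""
    # Simplistic background removal; noise left in psf propagates
    # straight into the MTF estimate.
    psf = psf_profile - np.median(psf_profile)
    spectrum = np.abs(np.fft.rfft(psf))
    freqs = np.fft.rfftfreq(len(psf), d=pixel_spacing_mm)
    return freqs, spectrum / spectrum[0]
```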

Relevance: 20.00%

Abstract:

The value of earmarks as an efficient means of personal identification is still subject to debate. It has been argued that the field lacks a firm, systematic and structured data basis to help practitioners form their conclusions. In particular, there is a paucity of research providing guidance as to the selectivity of the features used when comparing an earmark with reference earprints taken from an individual. This study proposes a system for the automatic comparison of earprints and earmarks, operating without any manual extraction of key points or manual annotation. For each donor, a model is created from multiple reference prints, hence capturing the donor's within-source variability. For each comparison between a mark and a model, the images are automatically aligned and a proximity score, based on a normalized 2D correlation coefficient, is calculated. Appropriate use of this score allows the derivation of a likelihood ratio that can be explored under known states of affairs (both in cases where the mark is known to have been left by the donor who provided the model and, conversely, in cases where the mark is known to originate from a different source). To assess system performance, a first dataset containing 1229 donors, compiled during the FearID research project, was used. On these data, the system achieved an equal error rate (EER) of 2.3% for mark-to-print comparisons, with about 88% of marks found within the first 3 positions of the hit list. For print-to-print comparisons, the equal error rate was 0.5%. The system was then tested using real-case data obtained from police forces.
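The proximity score named in the abstract is a normalized 2D correlation coefficient. A minimal sketch over two pre-aligned, equal-size images follows; the alignment step, which is the hard part of the pipeline, is omitted.

```python
import numpy as np

def proximity_score(mark, model_print):
    """Normalized 2D correlation coefficient (Pearson over pixels)
    between two pre-aligned, same-size grayscale images."""
    a = mark - mark.mean()
    b = model_print - model_print.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))
```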

Relevance: 20.00%

Abstract:

MRI has evolved into an important diagnostic technique in medical imaging. However, the reliability of the derived diagnosis can be degraded by artifacts, which challenge both radiologists and automatic computer-aided diagnosis. This work proposes a fully automatic method for measuring the image quality of three-dimensional (3D) structural MRI. Quality measures are derived by analyzing the air background of magnitude images and are capable of detecting image degradation from several sources, including bulk motion, residual magnetization from incomplete spoiling, blurring, and ghosting. The method has been validated on 749 3D T1-weighted 1.5T and 3T head scans acquired at 36 Alzheimer's Disease Neuroimaging Initiative (ADNI) study sites operating with various software and hardware combinations. Results were compared against qualitative grades assigned by the ADNI quality control center (taken as the reference standard). The derived quality indices are independent of the MRI system used and agree with the reference-standard quality ratings with high sensitivity and specificity (>85%). The proposed procedures for quality assessment could be of great value for both research and routine clinical imaging: they could greatly improve workflow through their ability to rule out the need for a repeat scan while the patient is still in the magnet bore.
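The specific indices are not given in the abstract; the sketch below illustrates the underlying principle with a toy index of my own. In artifact-free magnitude MRI, air-background voxels follow a Rayleigh distribution, so structured residue (ghosting, motion) makes the observed background statistics deviate from the Rayleigh prediction.

```python
import numpy as np

def background_rayleigh_index(volume, air_mask):
    """Toy quality index: ratio of observed to Rayleigh-expected
    background standard deviation (~1.0 for a clean scan)."""
    bg = volume[air_mask].astype(float)
    # For Rayleigh noise: mean = sigma*sqrt(pi/2), var = (2 - pi/2)*sigma^2.
    sigma = bg.mean() / np.sqrt(np.pi / 2.0)
    expected_std = sigma * np.sqrt(2.0 - np.pi / 2.0)
    return bg.std() / expected_std
```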

Relevance: 20.00%

Abstract:

The paper deals with the development and application of a methodology for the automatic mapping of pollution/contamination data. The General Regression Neural Network (GRNN) is considered in detail and is proposed as an efficient tool to solve this problem. The automatic tuning of isotropic and anisotropic GRNN models using a cross-validation procedure is presented (a sketch of this tuning loop is given after the first GRNN abstract above). Results are compared with the k-nearest-neighbours interpolation algorithm using an independent validation data set. The quality of mapping is controlled by analysis of the raw data and of the residuals using variography. Maps of the probability of exceeding a given decision level and 'thick' isoline visualization of the uncertainties are presented as examples of decision-oriented mapping. A real case study is based on mapping of radioactively contaminated territories.

Relevance: 20.00%

Abstract:

BACKGROUND: The GENCODE consortium was formed to identify and map all protein-coding genes within the ENCODE regions. This was achieved by a combination of initial manual annotation by the HAVANA team, experimental validation by the GENCODE consortium, and refinement of the annotation based on these experimental results. RESULTS: The GENCODE gene features are divided into eight different categories, of which only the first two (known and novel coding sequence) are confidently predicted to be protein-coding genes. 5' rapid amplification of cDNA ends (RACE) and RT-PCR were used to experimentally verify the initial annotation. Of the 420 coding loci tested, 229 RACE products have been sequenced. They supported 5' extensions of 30 loci and new splice variants in 50 loci. In addition, 46 loci without evidence for a coding sequence were validated, consisting of 31 novel and 15 putative transcripts. We assessed the comprehensiveness of the GENCODE annotation by attempting to validate all the predicted exon boundaries outside the GENCODE annotation. Of the 1,215 tested in a subset of the ENCODE regions, 14 novel exon pairs were validated, only two of them in intergenic regions. CONCLUSION: In total, 487 loci, of which 434 are coding, have been annotated as part of the GENCODE reference set available from the UCSC browser. Comparison of the GENCODE annotation with RefSeq and ENSEMBL shows that only 40% of GENCODE exons are contained within the two sets, a reflection of the high number of annotated alternative splice forms with unique exons. Over 50% of coding loci have been experimentally verified by 5' RACE for EGASP, and the GENCODE collaboration is continuing to refine its annotation of 1% of the human genome with the aid of experimental validation.