993 results for Combining method
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Published in "AIP Conference Proceedings", Vol. 1648
Abstract:
Expressed sequence tags (ESTs) available in public and proprietary databases are among the largest resources for biological sequence data. ESTs provide information on transcripts, but for technical reasons they often contain sequencing errors. Therefore, when analyzing EST sequences computationally, such errors must be taken into account. Earlier attempts to model error-prone coding regions have shown good performance in detecting and predicting such regions while correcting sequencing errors using codon usage frequencies. In the research presented here, we improve the detection of translation start and stop sites by integrating a more complex mRNA model with codon-usage-bias-based error correction into one hidden Markov model (HMM), thus generalizing this error correction approach to more complex HMMs. We show that our method maintains the performance in detecting coding sequences.
Abstract:
OBJECTIVES: Family studies typically use multiple sources of information on each individual including direct interviews and family history information. The aims of the present study were to: (1) assess agreement for diagnoses of specific substance use disorders between direct interviews and the family history method; (2) compare prevalence estimates according to the two methods; (3) test strategies to approximate prevalence estimates according to family history reports to those based on direct interviews; (4) determine covariates of inter-informant agreement; and (5) identify covariates that affect the likelihood of reporting disorders by informants. METHODS: Analyses were based on family study data which included 1621 distinct informant (first-degree relatives and spouses) - index subject pairs. RESULTS: Our main findings were: (1) inter-informant agreement was fair to good for all substance disorders, except for alcohol abuse; (2) the family history method underestimated the prevalence of drug but not alcohol use disorders; (3) lowering diagnostic thresholds for drug disorders and combining multiple family histories increased the accuracy of prevalence estimates for these disorders according to the family history method; (4) female sex of index subjects was associated with higher agreement for nearly all disorders; and (5) informants who themselves had a history of the same substance use disorder were more likely to report this disorder in their relatives, which entails the risk of overestimation of the size of familial aggregation. CONCLUSION: Our findings have important implications for the best-estimate procedure applied in family studies.
Abstract:
This paper presents the design and implementation of QRP, an open source proof-of-concept authentication system that provides two-factor authentication by combining a password with a camera-equipped mobile phone acting as an authentication token. QRP is extremely secure, as all the sensitive information stored and transmitted is encrypted, but it is also an easy-to-use and cost-efficient solution. QRP is portable and can be used securely on untrusted computers. Finally, QRP is able to successfully authenticate even when the phone is offline.
Abstract:
BACKGROUND: The use of the family history method is recommended in family studies as a type of proxy interview of non-participating relatives. However, using different sources of information can result in bias as direct interviews may provide a higher likelihood of assigning diagnoses than family history reports. The aims of the present study were to: 1) compare diagnoses for threshold and subthreshold mood syndromes from interviews to those relying on information from relatives; 2) test the appropriateness of lowering the diagnostic threshold and combining multiple reports from the family history method to obtain comparable prevalence estimates to the interviews; 3) identify factors that influence the likelihood of agreement and reporting of disorders by informants. METHODS: Within a family study, 1621 informant-index subject pairs were identified. DSM-5 diagnoses from direct interviews of index subjects were compared to those derived from family history information provided by their first-degree relatives. RESULTS: 1) Inter-informant agreement was acceptable for Mania, but low for all other mood syndromes. 2) Except for Mania and subthreshold depression, the family history method provided significantly lower prevalence estimates. The gap improved for all other syndromes after lowering the threshold of the family history method. 3) Individuals who had a history of depression themselves were more likely to report depression in their relatives. LIMITATIONS: Low proportion of affected individuals for manic syndromes and lack of independence of data. CONCLUSIONS: The higher likelihood of reporting disorders by affected informants entails the risk of overestimation of the size of familial aggregation of depression.
Abstract:
BACKGROUND: Laparoscopic techniques have been proposed as an alternative to open surgery for therapy of peptic ulcer perforation. They provide better postoperative comfort and absence of parietal complications, but leakage occurs in 5% of cases. We describe a new method combining laparoscopy and endoluminal endoscopy, designed to ensure complete closure of the perforation. METHODS: Six patients with anterior ulcer perforations (4 duodenal, 2 gastric) underwent a concomitant laparoscopy and endoluminal endoscopy with closure of the orifice by an omental plug attracted into the digestive tract. RESULTS: All perforations were sealed. The mean operating time was 72 minutes. The mean hospital stay was 5.5 days. There was no morbidity and no mortality. At the 30-day evaluation all ulcers but one (due to Helicobacter pylori persistence) were healed. CONCLUSIONS: This method is safe and effective. Its advantages compared with open surgery or laparoscopic patching as well as its cost-effectiveness should be studied in prospective randomized trials.
Abstract:
Background: A number of studies have used protein interaction data alone for protein function prediction. Here, we introduce a computational approach for annotation of enzymes, based on the observation that similar protein sequences are more likely to perform the same function if they share similar interacting partners. Results: The method has been tested against the PSI-BLAST program using a set of 3,890 protein sequences for which interaction data was available. For protein sequences that align with at least 40% sequence identity to a known enzyme, the specificity of our method in predicting the first three EC digits increased from 80% to 90% at 80% coverage when compared to PSI-BLAST. Conclusion: Our method can also be used for proteins for which homologous sequences with known interacting partners can be detected. Thus, our method could increase the specificity of genome-wide enzyme predictions based on sequence matching by PSI-BLAST alone by 10%.
Abstract:
As more tumor antigens are discovered and as computer-guided T cell epitope prediction programs become more sophisticated, many potential T cell epitopes are synthesized and demonstrated to be antigenic in vitro. However, it is estimated that about 50% of such tumor antigen-specific T cells have not been demonstrated to recognize the naturally presented epitopes, either because of technical difficulties, such as T cell cloning, which is still challenging for many laboratories, or because the predicted T cell epitopes are not generated, or not generated in sufficient amounts, by the antigen processing machinery. However, to identify clinically relevant vaccine candidate epitopes, it is essential to demonstrate natural antigen presentation. Here we combine the advantages of MHC tetramer and intracellular cytokine staining to sensitively detect natural antigen presentation by tumor cells for epitopes of interest. The novel method does not require T cell cloning or long-term T cell culture. Because the antigen-specific T cells are positively identified, this method is much less influenced by IFNgamma-producing cells with unknown specificities and should be widely applicable.
Abstract:
This paper presents an Optimised Search Heuristic that combines a tabu search method with the verification of violated valid inequalities. The solution delivered by the tabu search is partially destroyed by a randomised greedy procedure, and then the valid inequalities are used to guide the reconstruction of a complete solution. An application of the new method to the Job-Shop Scheduling problem is presented.
Abstract:
A new method for rearing Spodoptera frugiperda in the laboratory shows that larval cannibalism is not obligatory. Here we show, for the first time, that larvae of the fall armyworm (FAW), Spodoptera frugiperda (Lepidoptera, Noctuidae), can be successfully reared in a cohort-based manner with virtually no cannibalism. FAW larvae were reared from the second instar to pupation in rectangular plastic containers holding 40 individuals, with a surprisingly high larval survivorship of ca. 90%. Adult females from the cohort-based method showed fecundity similar to that already reported in the literature for larvae reared individually, and fertility higher than 99%, with the advantage of combining economy of time, space, and material resources. These findings suggest that the factors affecting cannibalism of FAW larvae in laboratory rearings need to be reevaluated, and the new technique also shows potential to increase the efficiency of both small-scale and mass FAW rearings.
Abstract:
Two methods of differential isotopic coding of carboxylic groups have been developed to date. The first approach uses d0- or d3-methanol to convert carboxyl groups into the corresponding methyl esters. The second relies on the incorporation of two 18O atoms into the C-terminal carboxylic group during tryptic digestion of proteins in H(2)18O. However, both methods have limitations, such as chromatographic separation of 1H and 2H derivatives or overlap of the isotopic distributions of light and heavy forms due to small mass shifts. Here we present a new tagging approach based on the specific incorporation of sulfanilic acid into carboxylic groups. The reagent was synthesized in a heavy form (13C phenyl ring), showing no chromatographic shift and an optimal isotopic separation with a 6 Da mass shift. Moreover, sulfanilic acid allows for simplified fragmentation in matrix-assisted laser desorption/ionization (MALDI) due to fixation of the charge by the sulfonate group at the C-terminus of the peptide. The derivatization is simple, specific, and minimizes the number of sample treatment steps that can strongly alter the sample composition. The quantification is reproducible within an order of magnitude and can be analyzed by either electrospray ionization (ESI) or MALDI. Finally, the method is able to specifically identify the C-terminal peptide of a protein by using GluC as the proteolytic enzyme.
Abstract:
The ability to determine the location and relative strength of all transcription-factor binding sites in a genome is important both for a comprehensive understanding of gene regulation and for effective promoter engineering in biotechnological applications. Here we present a bioinformatically driven experimental method to accurately define the DNA-binding sequence specificity of transcription factors. A generalized profile was used as a predictive quantitative model for binding sites, and its parameters were estimated from in vitro-selected ligands using standard hidden Markov model training algorithms. Computer simulations showed that several thousand low- to medium-affinity sequences are required to generate a profile of desired accuracy. To produce data on this scale, we applied high-throughput genomics methods to the biochemical problem addressed here. A method combining systematic evolution of ligands by exponential enrichment (SELEX) and serial analysis of gene expression (SAGE) protocols was coupled to an automated quality-controlled sequence extraction procedure based on Phred quality scores. This allowed the sequencing of a database of more than 10,000 potential DNA ligands for the CTF/NFI transcription factor. The resulting binding-site model defines the sequence specificity of this protein with a high degree of accuracy not achieved earlier and thereby makes it possible to identify previously unknown regulatory sequences in genomic DNA. A covariance analysis of the selected sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism.
Abstract:
Many types of tumors exhibit characteristic chromosomal losses or gains, as well as local amplifications and deletions. Within any given tumor type, sample-specific amplifications and deletions are also observed. Typically, a region that is aberrant in more tumors, or whose copy number change is stronger, would be considered a more promising candidate to be biologically relevant to cancer. We sought an intuitive method to define such aberrations and prioritize them. We define V, the "volume" associated with an aberration, as the product of three factors: (a) the fraction of patients with the aberration, (b) the aberration's length, and (c) its amplitude. Our algorithm compares the values of V derived from the real data to a null distribution obtained by permutations, and yields the statistical significance (p-value) of the measured value of V. We detected genetic locations that were significantly aberrant, and combined them with chromosomal arm status (gain/loss) to create a succinct fingerprint of the tumor genome. This genomic fingerprint is used to visualize the tumors, highlighting events that are co-occurring or mutually exclusive. We apply the method to three different public array CGH datasets of Medulloblastoma and Neuroblastoma, and demonstrate its ability to detect chromosomal regions that were known to be altered in the tested cancer types, as well as to suggest new genomic locations to be tested. We identified a potential new subtype of Medulloblastoma, which is analogous to Neuroblastoma type 1.
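The volume statistic and permutation-based significance test described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: the carrier-detection rule (any nonzero mean in the region) and the per-patient probe shuffling used as the null model are simplifying assumptions.

```python
import numpy as np

def aberration_volume(fraction, length, amplitude):
    """Volume V of an aberration: the product of (a) the fraction of
    patients carrying it, (b) its genomic length, and (c) its amplitude."""
    return fraction * length * amplitude

def volume_p_value(matrix, start, end, n_perm=1000, seed=0):
    """Estimate the significance of the aberration spanning probes
    [start, end) in a patients-by-probes copy-number matrix, comparing
    its observed volume V to a permutation null distribution."""
    rng = np.random.default_rng(seed)

    def volume(m):
        region = m[:, start:end]
        # A patient "carries" the aberration if the region deviates from 0.
        carriers = np.abs(region).mean(axis=1) > 0
        fraction = carriers.mean()
        amplitude = np.abs(region[carriers]).mean() if carriers.any() else 0.0
        return aberration_volume(fraction, end - start, amplitude)

    observed = volume(matrix)
    null = np.empty(n_perm)
    for i in range(n_perm):
        # Shuffle probe order independently per patient to break the
        # spatial clustering that a real shared aberration produces.
        shuffled = np.apply_along_axis(rng.permutation, 1, matrix)
        null[i] = volume(shuffled)
    p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, p_value

# Toy example: 15 of 20 patients share an amplification over probes 10-19.
cn = np.zeros((20, 50))
cn[:15, 10:20] = 2.0
obs, p = volume_p_value(cn, 10, 20, n_perm=200)
```

In the toy example the observed volume is 0.75 × 10 × 2.0 = 15, while permuted genomes scatter the amplified probes and yield much smaller volumes, so the region comes out as significant.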
Abstract:
Within the ENCODE Consortium, GENCODE aimed to accurately annotate all protein-coding genes, pseudogenes, and noncoding transcribed loci in the human genome through manual curation and computational methods. Annotated transcript structures were assessed, and less well-supported loci were systematically, experimentally validated. Predicted exon-exon junctions were evaluated by RT-PCR amplification followed by a highly multiplexed sequencing readout, a method we called RT-PCR-seq. Seventy-nine percent of all assessed junctions were confirmed by this evaluation procedure, demonstrating the high quality of the GENCODE gene set. RT-PCR-seq was also efficient in screening gene models predicted using the Human Body Map (HBM) RNA-seq data. We validated 73% of these predictions, thus confirming 1168 novel genes, mostly noncoding, which will further complement the GENCODE annotation. Our novel experimental validation pipeline is extremely sensitive, far more so than unbiased transcriptome profiling through RNA sequencing, which is becoming the norm. For example, exon-exon junctions unique to GENCODE annotated transcripts are five times more likely to be corroborated with our targeted approach than with extensive large-scale human transcriptome profiling. Data sets such as the HBM and ENCODE RNA-seq data fail to sample low-expressed transcripts. Our RT-PCR-seq targeted approach also has the advantage of identifying novel exons of known genes, as we discovered unannotated exons in ~11% of assessed introns. We thus estimate that at least 18% of known loci have yet-unannotated exons. Our work demonstrates that cataloging all of the genic elements encoded in the human genome will necessitate a coordinated effort between unbiased and targeted approaches, like RNA-seq and RT-PCR-seq.