972 results for sequences analysis technology


Relevance:

40.00%

Publisher:

Abstract:

Background: There is growing interest in the potential utility of real-time polymerase chain reaction (PCR) in diagnosing bloodstream infection by detecting pathogen deoxyribonucleic acid (DNA) in blood samples within a few hours. SeptiFast (Roche Diagnostics GmbH, Mannheim, Germany) is a multipathogen probe-based system targeting ribosomal DNA sequences of bacteria and fungi, detecting and identifying the commonest pathogens causing bloodstream infection. As background to this study, we report a systematic review of Phase III diagnostic accuracy studies of SeptiFast, which reveals uncertainty about its likely clinical utility: there is widespread evidence of deficiencies in study design and reporting, with a high risk of bias.

Objective: To determine the accuracy of SeptiFast real-time PCR for the detection of health-care-associated bloodstream infection, compared against standard microbiological culture.

Design: Prospective multicentre Phase III clinical diagnostic accuracy study conducted using the Standards for the Reporting of Diagnostic Accuracy Studies (STARD) criteria.

Setting: Critical care departments within NHS hospitals in the north-west of England. 

Participants: Adult patients requiring blood culture (BC) when developing new signs of systemic inflammation. 

Main outcome measures: SeptiFast real-time PCR results at species/genus level compared with microbiological culture in association with independent adjudication of infection. Metrics of diagnostic accuracy were derived including sensitivity, specificity, likelihood ratios and predictive values, with their 95% confidence intervals (CIs). Latent class analysis was used to explore the diagnostic performance of culture as a reference standard. 

Results: Of 1006 new patient episodes of systemic inflammation in 853 patients, 922 (92%) met the inclusion criteria and provided sufficient information for analysis. Index test assay failure occurred on 69 (7%) occasions. Adult patients had been exposed to a median of 8 days (interquartile range 4–16 days) of hospital care, had high levels of organ support activities and recent antibiotic exposure. SeptiFast real-time PCR, when compared with culture-proven bloodstream infection at species/genus level, had better specificity (85.8%, 95% CI 83.3% to 88.1%) than sensitivity (50%, 95% CI 39.1% to 60.8%). When compared with pooled diagnostic metrics derived from our systematic review, our clinical study revealed lower test accuracy of SeptiFast real-time PCR, mainly as a result of low diagnostic sensitivity. There was a low prevalence of BC-proven pathogens in these patients (9.2%, 95% CI 7.4% to 11.2%) such that the post-test probabilities of both a positive (26.3%, 95% CI 19.8% to 33.7%) and a negative SeptiFast test (5.6%, 95% CI 4.1% to 7.4%) indicate the potential limitations of this technology in the diagnosis of bloodstream infection. However, latent class analysis indicates that BC has a low sensitivity, questioning its relevance as a reference test in this setting. Using this analysis approach, the sensitivity of the SeptiFast test was low but also appeared significantly better than BC. Blood samples identified as positive by either culture or SeptiFast real-time PCR were associated with a high probability (> 95%) of infection, indicating higher diagnostic rule-in utility than was apparent using conventional analyses of diagnostic accuracy. 

Conclusion: SeptiFast real-time PCR on blood samples may have rapid rule-in utility for the diagnosis of health-care-associated bloodstream infection, but the lack of sensitivity is a significant limiting factor. Innovations aimed at improving the diagnostic sensitivity of real-time PCR in this setting are urgently required. Future work recommendations include technology developments to improve the efficiency of pathogen DNA extraction, the capacity to detect a much broader range of pathogens and drug-resistance genes, and the application of new statistical approaches able to assess test performance more reliably in situations where the reference standard (e.g. blood culture in the setting of high antimicrobial use) is prone to error.
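As an illustration of how the post-test probabilities reported above follow from sensitivity, specificity and prevalence via Bayes' theorem, here is a minimal sketch in Python using the abstract's point estimates (function and variable names are ours):

```python
# Post-test probabilities from sensitivity, specificity and prevalence,
# via Bayes' theorem, using the point estimates reported in the abstract.

def post_test_probabilities(sensitivity, specificity, prevalence):
    """Return P(infection | positive test) and P(infection | negative test)."""
    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    post_pos = sensitivity * prevalence / p_positive
    p_negative = (1 - sensitivity) * prevalence + specificity * (1 - prevalence)
    post_neg = (1 - sensitivity) * prevalence / p_negative
    return post_pos, post_neg

# Sensitivity 50%, specificity 85.8%, prevalence of BC-proven pathogens 9.2%.
post_pos, post_neg = post_test_probabilities(0.50, 0.858, 0.092)
print(f"positive test: {post_pos:.1%}")  # ~26.3%, as reported
print(f"negative test: {post_neg:.1%}")  # ~5.6%, as reported
```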

Relevance:

40.00%

Publisher:

Abstract:

Parent–school relationships contribute significantly to the quality of students' education. The Internet, in turn, has begun to influence the way individuals communicate socially, and most school boards in Ontario now use the Internet to communicate with parents, which helps build parent–school relationships. This project comprised a conceptual analysis of how the Internet enhances parent–school relationships, intended to support Ontario school board administrators seeking to implement such technology. The study's literature review identified the links between Web 2.0 technology, parent–school relationships, and effective parent engagement. A conceptual framework of the features of Web 2.0 tools that promote social interaction was developed and used to analyze the websites of three Ontario school boards. The analysis revealed that school board websites used static features such as email, newsletters, and announcements for communication, and did not give parents the means to provide feedback through Web 2.0 features such as instant messaging. General recommendations were made so that school board administrators have the opportunity to implement changes in their school communities through feasible modifications. Overall, Web 2.0-based technologies such as interactive communication tools and social media hold the most promise for enhancing parent–school relationships because they can not only help overcome barriers of time and distance, but also strengthen parents' desire to be engaged in their children's educational experiences.

Relevance:

40.00%

Publisher:

Abstract:

Three-dimensional (3D) composites are strong contenders for structural applications in the aerospace, aircraft and automotive industries, where multidirectional thermal and mechanical stresses exist. The presence of reinforcement along the thickness direction in 3D composites increases the through-the-thickness stiffness and strength properties. 3D preforms can be manufactured with numerous complex architecture variations to meet the needs of specific applications. For hot-structure applications, Carbon-Carbon (C-C) composites are generally used, and their property variation with respect to temperature is essential for the design of hot structures. The thermomechanical behaviour of 3D composites is not fully understood and reported. The present study deals with a methodology to find the thermomechanical properties of 3D woven, 3D 4-axis braided and 3D 5-axis braided composites by analytical modelling from Representative Unit Cells (RUCs), based on constitutive equations for 3D composites. High-temperature unidirectional (UD) Carbon-Carbon material properties have been evaluated using analytical methods, viz. the Composite Cylinder Assemblage model and the Method of Cells, based on experiments carried out on Carbon-Carbon fabric composite over a temperature range of 300 K to 2800 K. These properties have been used for evaluating the 3D composite properties. From among the existing solution sequences for 3D composites, the "3D Composite Strength Model" has been identified as the most suitable method. Software has been developed in MATLAB for the generation of the material properties of the RUCs of 3D composites. Correlation of the analytically determined properties with test results available in the literature has been established. Parametric studies on the variation of all the thermomechanical constants for the different 3D preforms of Carbon-Carbon material have been carried out, and selection criteria have been formulated for their application to hot structures. A procedure for the structural design of hot structures made of 3D Carbon-Carbon composites has been established through numerical investigations on a Nosecap. Nonlinear transient thermal and nonlinear transient thermo-structural analyses of the Nosecap have been carried out using the finite element software NASTRAN. Failure indices have been established for the identified preforms; identification of a suitable 3D composite based on parametric studies of strength properties, and recommendation of this material for the Nosecap of an RLV based on structural performance, have been carried out in this study. Based on the 3D failure theory, the best preform for the Nosecap has been identified as the 4-axis 15° braided composite.
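As a simplified illustration of the kind of micromechanics estimate that underlies such homogenisation (the study itself uses the Composite Cylinder Assemblage model and the Method of Cells, not this simpler rule), here is a rule-of-mixtures sketch in Python with placeholder constituent properties:

```python
# Illustrative only: a simple rule-of-mixtures estimate of unidirectional
# (UD) lamina stiffness from fibre/matrix properties. The study itself uses
# the Composite Cylinder Assemblage model and the Method of Cells; the
# numbers below are placeholders, not measured Carbon-Carbon data.

def rule_of_mixtures(Ef, Em, Vf):
    """Longitudinal (Voigt) and transverse (Reuss) moduli of a UD lamina."""
    Vm = 1.0 - Vf
    E1 = Ef * Vf + Em * Vm          # longitudinal modulus: parallel loading
    E2 = 1.0 / (Vf / Ef + Vm / Em)  # transverse modulus: inverse (series) rule
    return E1, E2

E1, E2 = rule_of_mixtures(Ef=230e9, Em=10e9, Vf=0.6)  # Pa; placeholder values
print(f"E1 = {E1 / 1e9:.1f} GPa, E2 = {E2 / 1e9:.1f} GPa")
```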

Relevance:

40.00%

Publisher:

Abstract:

Modern computer systems are plagued with stability and security problems: applications lose data, web servers are hacked, and systems crash under heavy load. Many of these problems, or anomalies, arise from rare program behaviour caused by attacks or errors. A substantial percentage of web-based attacks are due to buffer overflows. Many methods have been devised to detect and prevent the anomalous situations that arise from buffer overflows. The current state of the art in anomaly detection systems is relatively primitive and depends mainly on static code checking to take care of buffer overflow attacks. For protection, stack guards and heap guards are also used in a wide variety of forms. This dissertation proposes an anomaly detection system based on the frequencies of system calls in the system call trace. System call traces represented as frequency sequences are profiled using sequence sets. A sequence set is identified by the starting sequence and the frequencies of specific system calls. The deviation of the current input sequence from the corresponding normal profile in the frequency pattern of system calls is computed and expressed as an anomaly score. A simple Bayesian model is used for accurate detection. Experimental results are reported which show that the frequency of system calls, represented using sequence sets, captures the normal behaviour of programs under normal conditions of usage. This captured behaviour allows the system to detect anomalies with a low rate of false positives. Data are presented which show that a Bayesian network over frequency variations responds effectively to induced buffer overflows. It can also help administrators detect deviations in program flow introduced by errors.
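As a simplified stand-in for the frequency-profile idea (not the dissertation's exact sequence-set or Bayesian formulation; names, data and the scoring rule are illustrative), the following Python sketch profiles per-call frequencies from normal traces and scores a new trace by its deviation:

```python
# Minimal sketch: profile per-system-call frequencies from normal traces,
# then score a new trace by its deviation from that profile. Illustrative
# stand-in only; the dissertation's actual model uses sequence sets and a
# Bayesian formulation.
from collections import Counter
import math

SYSCALLS = ["open", "read", "write", "close", "mmap", "execve"]

def frequency_vector(trace):
    """Relative frequency of each system call in a trace."""
    counts = Counter(trace)
    total = max(len(trace), 1)
    return [counts[s] / total for s in SYSCALLS]

def build_profile(normal_traces):
    """Mean and standard deviation of each call's frequency under normal use."""
    vectors = [frequency_vector(t) for t in normal_traces]
    n = len(vectors)
    means = [sum(v[i] for v in vectors) / n for i in range(len(SYSCALLS))]
    stds = [max(math.sqrt(sum((v[i] - means[i]) ** 2 for v in vectors) / n), 1e-6)
            for i in range(len(SYSCALLS))]
    return means, stds

def anomaly_score(trace, profile):
    """Sum of squared z-scores: large values flag deviating frequency patterns."""
    means, stds = profile
    freqs = frequency_vector(trace)
    return sum(((f - m) / s) ** 2 for f, m, s in zip(freqs, means, stds))

normal = [["open", "read", "read", "close"], ["open", "read", "write", "close"]]
profile = build_profile(normal)
print(anomaly_score(["execve", "mmap", "mmap", "write"], profile))  # high score
```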

Relevance:

40.00%

Publisher:

Abstract:

Computational biology is the research area that contributes to the analysis of biological data through the development of algorithms that address significant research problems. Data from molecular biology include DNA, RNA, protein and gene expression data. Gene expression data provide the expression levels of genes under different conditions. Gene expression is the process of transcribing the DNA sequence of a gene into mRNA sequences, which in turn are later translated into proteins. The number of copies of mRNA produced is called the expression level of a gene. Gene expression data are organised in the form of a matrix: rows represent genes, columns represent experimental conditions (which can be different tissue types or time points), and the entries are real values. Through the analysis of gene expression data it is possible to determine behavioural patterns of genes, such as the similarity of their behaviour, the nature of their interactions, and their respective contributions to the same pathways. Similar expression patterns are exhibited by genes participating in the same biological process. These patterns have immense relevance and application in bioinformatics and clinical research; they are used in the medical domain to aid more accurate diagnosis, prognosis, treatment planning, drug discovery and protein network analysis. Data mining techniques are essential for identifying such patterns from gene expression data. Clustering is an important data mining technique for the analysis of gene expression data. To overcome the problems associated with clustering, biclustering has been introduced. Biclustering refers to the simultaneous clustering of both rows and columns of a data matrix: clustering is a global model, whereas biclustering is a local one. Discovering local expression patterns is essential for identifying many genetic pathways that are not apparent otherwise. It is therefore necessary to move beyond the clustering paradigm towards developing approaches capable of discovering local patterns in gene expression data. A bicluster is a submatrix of the gene expression data matrix; its rows and columns need not be contiguous as in the original matrix, and biclusters are not disjoint. Computation of biclusters is costly because one must consider all combinations of columns and rows in order to find all the biclusters: the search space for the biclustering problem is 2^(m+n), where m and n are the numbers of genes and conditions respectively, and usually m+n is more than 3000. The biclustering problem is NP-hard. Biclustering is a powerful analytical tool for the biologist. The research reported in this thesis addresses the problem of biclustering. Ten algorithms are developed for the identification of coherent biclusters from gene expression data. All of these algorithms make use of a measure called the mean squared residue to search for biclusters; the objective is to identify biclusters of maximum size with a mean squared residue lower than a given threshold.

All of these algorithms begin the search from tightly co-regulated submatrices called seeds, which are generated by the K-Means clustering algorithm. The algorithms developed can be classified as constraint-based, greedy and metaheuristic. The constraint-based algorithms use one or more of various constraints, namely the MSR threshold and the MSR difference threshold. The greedy approach makes a locally optimal choice at each stage with the objective of finding the global optimum. In the metaheuristic approaches, Particle Swarm Optimization (PSO) and variants of the Greedy Randomized Adaptive Search Procedure (GRASP) are used for the identification of biclusters. These algorithms have been implemented on the Yeast and Lymphoma datasets. Biologically relevant and statistically significant biclusters are identified by all of these algorithms and validated against the Gene Ontology database. All of these algorithms are compared with other biclustering algorithms, and the algorithms developed in this work overcome some of the problems associated with existing algorithms. With the help of some of the algorithms developed in this work, biclusters with very high row variance, higher than the row variance achieved by any other algorithm using the mean squared residue, are identified from both the Yeast and Lymphoma datasets. Such biclusters, which reflect significant changes in expression level, are highly relevant biologically.
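For concreteness, assuming the standard Cheng–Church definition of the residue, the mean squared residue of a candidate bicluster can be computed as in the following Python sketch (toy data, not the Yeast or Lymphoma sets):

```python
# Mean squared residue (MSR) of a candidate bicluster, the measure the
# algorithms above search with (standard Cheng-Church definition assumed).
# A bicluster is a submatrix indexed by a row set (genes) and a column set
# (conditions); the expression matrix below is randomly generated toy data.
import numpy as np

def mean_squared_residue(A, rows, cols):
    """MSR of submatrix A[rows, cols]: mean of squared residues
    r_ij = a_ij - (row mean)_i - (column mean)_j + overall mean."""
    sub = A[np.ix_(rows, cols)]
    row_means = sub.mean(axis=1, keepdims=True)
    col_means = sub.mean(axis=0, keepdims=True)
    residue = sub - row_means - col_means + sub.mean()
    return float((residue ** 2).mean())

expr = np.random.default_rng(0).normal(size=(100, 20))  # toy expression matrix
print(mean_squared_residue(expr, rows=[1, 5, 9], cols=[0, 3, 7]))
```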

Relevance:

40.00%

Publisher:

Abstract:

Knowledge discovery in databases is the non-trivial process of identifying valid, novel, potentially useful and ultimately understandable patterns from data. The term data mining refers to the process of performing exploratory analysis on the data and building models of it. To infer patterns from data, data mining involves different approaches such as association rule mining, classification techniques and clustering techniques. Among the many data mining techniques, clustering plays a major role, since it helps to group related data for assessing properties and drawing conclusions. Most clustering algorithms act on a dataset with a uniform format, since the similarity or dissimilarity between data points is a significant factor in finding the clusters. If a dataset consists of mixed attributes, i.e. a combination of numerical and categorical variables, a preferred approach is to convert the different formats into a uniform one. This research study explores various techniques for converting mixed datasets to a numerical equivalent, so as to make them suitable for statistical and similar algorithms. The results of clustering mixed-category data after conversion to a numeric data type are demonstrated using a crime dataset. The thesis also proposes an extension to a well-known algorithm for handling mixed data types, to deal with datasets having only categorical data; the proposed conversion has been validated on a dataset corresponding to breast cancer. Moreover, another issue with the clustering process is the visualisation of output. Different geometric techniques such as scatter plots or projection plots are available, but none of them displays the result as a projection of the whole database; rather, they demonstrate attribute-pairwise analysis.
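As an illustration of the general mixed-to-numeric conversion step (one-hot encoding here; the thesis's own conversion scheme and its crime and breast-cancer datasets are not reproduced), a minimal Python sketch:

```python
# Minimal sketch of a common mixed-to-numeric conversion: one-hot encode the
# categorical columns, scale the numeric ones, then cluster. Illustrative of
# the general approach only; not the thesis's specific conversion or data.
import pandas as pd
from sklearn.cluster import KMeans

data = pd.DataFrame({
    "age": [25, 47, 35, 52],                       # numerical attribute
    "income": [30_000, 80_000, 45_000, 90_000],    # numerical attribute
    "region": ["north", "south", "south", "east"]  # categorical attribute
})

numeric = data[["age", "income"]]
numeric = (numeric - numeric.mean()) / numeric.std()  # z-score scaling
encoded = pd.get_dummies(data[["region"]])            # one-hot encoding
X = pd.concat([numeric, encoded], axis=1).to_numpy(dtype=float)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```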

Relevance:

40.00%

Publisher:

Abstract:

What are the fundamental entities in social networks, and what information is contained in social graphs? We will discuss selected concepts in social network analysis, such as one- and two-mode networks, prestige and centrality, and cliques, clans and clubs. Readings: Web tool predicts election results and stock prices, J. Palmer, New Scientist, 07 February (2008) [Protected Access] Optional: Social Network Analysis: Methods and Applications, S. Wasserman and K. Faust (1994)
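A small illustration of the centrality and clique concepts listed above, using the networkx library (assumed available) on a toy friendship graph:

```python
# Toy illustration of centrality and cliques with networkx (assumed
# installed); the graph and names are made up for demonstration.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("ann", "bob"), ("ann", "cat"), ("bob", "cat"),
                  ("cat", "dan"), ("dan", "eve")])

print(nx.degree_centrality(G))       # share of possible ties each node has
print(nx.betweenness_centrality(G))  # brokerage: shortest paths through a node
print(list(nx.find_cliques(G)))      # maximal cliques, e.g. {ann, bob, cat}
```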

Relevance:

40.00%

Publisher:

Abstract:

What are ways of searching in graphs? In this class, we will discuss the basics of link analysis, including Google's PageRank algorithm as an example. Readings: The PageRank Citation Ranking: Bringing Order to the Web, L. Page, S. Brin, R. Motwani and T. Winograd (1998), Stanford Technical Report
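A minimal power-iteration sketch of the PageRank algorithm discussed in class (a toy four-page link graph, not a production implementation):

```python
# Minimal power-iteration PageRank over a toy four-page link graph.
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}  # page -> pages it links to
N = 4
d = 0.85                                     # damping factor

# Column-stochastic transition matrix: M[j, i] = 1/outdegree(i) if i links to j.
M = np.zeros((N, N))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1.0 / len(outs)

rank = np.full(N, 1.0 / N)
for _ in range(100):
    rank = (1 - d) / N + d * M @ rank        # PageRank update
print(rank / rank.sum())                     # page 2 ranks highest
```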

Relevance:

40.00%

Publisher:

Abstract:

We have compiled two comprehensive gene expression profiles from mature leaf and immature seed tissue of rice (Oryza sativa ssp. japonica cultivar Nipponbare) using Serial Analysis of Gene Expression (SAGE) technology. Analysis revealed a total of 50 519 SAGE tags, corresponding to 15 131 unique transcripts. Of these, the large majority (approximately 70%) occur only once in both libraries. Unexpectedly, the most abundant transcript (approximately 3% of the total) in the leaf library was derived from a type 3 metallothionein gene. The overall frequency profiles of the abundant tag species from the two tissues differ greatly and reveal seed tissue as exhibiting a non-typical pattern of gene expression, characterized by an overabundance of a small number of transcripts coding for storage proteins. A high proportion (approximately 80%) of the abundant tags (≥ 9) matched entries in our reference rice EST database, with many fewer matches for low-abundance tags. Singleton transcripts common to both tissues were collated to generate a summary of low-abundance transcripts that are expressed constitutively in rice tissues. Finally and most surprisingly, a significant number of tags were found to code for antisense transcripts, a finding that suggests a novel mechanism of gene regulation and may have implications for the use of antisense constructs in transgenic technology.
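Tag abundance summaries of the kind reported above reduce to simple counting once tags have been extracted from the raw sequence data; a minimal sketch with made-up tags (not the rice libraries):

```python
# Minimal sketch of SAGE-style tag counting: tally tag frequencies per
# library, list singletons and shared tags. Tags here are made-up strings;
# real SAGE tags are short sequences extracted next to an anchoring
# enzyme site.
from collections import Counter

leaf_tags = ["CATGAAAT", "CATGAAAT", "CATGCCGT", "CATGTTAG"]
seed_tags = ["CATGCCGT", "CATGGGGA", "CATGGGGA", "CATGGGGA"]

leaf, seed = Counter(leaf_tags), Counter(seed_tags)
print("unique transcripts:", len(leaf | seed))
singletons = {t for t in (leaf | seed) if leaf[t] + seed[t] == 1}
print("tags seen only once:", singletons)
print("tags common to both libraries:", set(leaf) & set(seed))
```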

Relevance:

40.00%

Publisher:

Abstract:

Background: Medication errors are common in primary care and are associated with considerable risk of patient harm. We tested whether a pharmacist-led, information technology-based intervention was more effective than simple feedback in reducing the number of patients at risk from hazardous prescribing and inadequate blood-test monitoring of medicines 6 months after the intervention. Methods: In this pragmatic, cluster randomised trial, general practices in the UK were stratified by research site and list size, and randomly assigned by a web-based randomisation service in block sizes of two or four to one of two groups. The practices were allocated to either computer-generated simple feedback for at-risk patients (control) or a pharmacist-led information technology intervention (PINCER), composed of feedback, educational outreach, and dedicated support. The allocation was masked to general practices, patients, pharmacists, researchers, and statisticians. Primary outcomes were the proportions of patients at 6 months after the intervention who had had any of three clinically important errors: non-selective non-steroidal anti-inflammatory drugs (NSAIDs) prescribed to those with a history of peptic ulcer without co-prescription of a proton-pump inhibitor; β blockers prescribed to those with a history of asthma; long-term prescription of angiotensin converting enzyme (ACE) inhibitor or loop diuretics to those 75 years or older without assessment of urea and electrolytes in the preceding 15 months. The cost per error avoided was estimated by incremental cost-effectiveness analysis. This study is registered with Controlled-Trials.com, number ISRCTN21785299. Findings: 72 general practices with a combined list size of 480 942 patients were randomised. At 6 months' follow-up, patients in the PINCER group were significantly less likely to have been prescribed a non-selective NSAID if they had a history of peptic ulcer without gastroprotection (OR 0.58, 95% CI 0.38–0.89); a β blocker if they had asthma (0.73, 0.58–0.91); or an ACE inhibitor or loop diuretic without appropriate monitoring (0.51, 0.34–0.78). PINCER has a 95% probability of being cost-effective if the decision-maker's ceiling willingness to pay reaches £75 per error avoided at 6 months. Interpretation: The PINCER intervention is an effective method for reducing a range of medication errors in general practices with computerised clinical records. Funding: Patient Safety Research Portfolio, Department of Health, England.
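For reference, odds ratios such as those reported above are derived from 2×2 tables of event counts; a minimal sketch of the point estimate and a Woolf (logit) 95% CI, using hypothetical counts rather than the trial's data:

```python
# Odds ratio and Woolf (logit) 95% CI from a 2x2 table of event counts.
# The counts below are hypothetical, not the PINCER trial's data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = events/non-events in intervention; c, d = in control."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of the log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print(odds_ratio_ci(30, 970, 50, 950))  # hypothetical counts -> OR ~0.59
```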

Relevance:

40.00%

Publisher:

Abstract:

Leptospira have a worldwide distribution and include important zoonotic pathogens, yet diagnosis and differentiation still tend to rely on traditional bacteriological and serological approaches. In this study, a 1.3 kb fragment of the rrs gene (16S rDNA) was sequenced from a panel of 22 control strains, representing serovars within the pathogenic species Leptospira interrogans, Leptospira borgpetersenii and Leptospira kirschneri, to identify single nucleotide polymorphisms (SNPs). These were identified in the 5' variable region of the 16S sequence, and a 181 bp PCR fragment encompassing this region was used for speciation by denaturing high-performance liquid chromatography (D-HPLC). The method was applied to eleven additional species, representing pathogenic, non-pathogenic and intermediate species, and was demonstrated to rapidly differentiate all but two of the non-pathogenic Leptospira species. It was also applied successfully to infected tissues from field samples, proving its value for diagnosing leptospiral infections found in animals in the UK.
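The SNP-finding step amounts to locating variable columns in aligned sequences; a minimal sketch, assuming pre-aligned equal-length fragments and using toy sequences rather than the study's strain panel:

```python
# Minimal sketch of locating variable positions (candidate SNPs) in a set
# of pre-aligned, equal-length 16S rDNA fragments. Sequences are toy
# examples, not the study's strain panel.
aligned = {
    "L. interrogans":    "ACGTACGTAC",
    "L. borgpetersenii": "ACGTACGTGC",
    "L. kirschneri":     "ACGAACGTAC",
}

length = len(next(iter(aligned.values())))
snps = [i for i in range(length)
        if len({seq[i] for seq in aligned.values()}) > 1]
print("variable positions:", snps)  # here: [3, 8]
```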

Relevance:

40.00%

Publisher:

Abstract:

The UK has a target for an 80% reduction in CO2 emissions by 2050 from a 1990 base. Domestic energy use accounts for around 30% of total emissions. This paper presents a comprehensive review of existing models and modelling techniques and indicates how they might be improved by considering individual buying behaviour. Macro (top-down) and micro (bottom-up) models have been reviewed and analysed. It is found that bottom-up models can project technology diffusion due to their higher resolution. The weakness of existing bottom-up models at capturing individual green technology buying behaviour has been identified. Consequently, Markov chains, neural networks and agent-based modelling are proposed as possible methods to incorporate buying behaviour within a domestic energy forecast model. Among the three methods, agent-based models are found to be the most promising, although a successful agent approach requires large amounts of input data. A prototype agent-based model has been developed and tested, which demonstrates the feasibility of an agent approach. This model shows that an agent-based approach is promising as a means to predict the effectiveness of various policy measures.
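As an illustration of the agent-based approach (a toy threshold-adoption model, not the paper's prototype), the following Python sketch lets households adopt a green technology under combined neighbour and market influence:

```python
# Toy agent-based model of technology adoption: each household adopts once
# combined local (neighbour) and global (market) adoption pressure exceeds
# its personal threshold. Illustrative only; not the paper's prototype.
import random

random.seed(1)
N, STEPS = 100, 20
threshold = [random.uniform(0.0, 0.5) for _ in range(N)]    # willingness to adopt
adopted = [random.random() < 0.05 for _ in range(N)]        # ~5% early adopters
neighbours = [[(i - 1) % N, (i + 1) % N] for i in range(N)] # ring network

for _ in range(STEPS):
    market_share = sum(adopted) / N
    for i in range(N):
        if not adopted[i]:
            local = sum(adopted[j] for j in neighbours[i]) / len(neighbours[i])
            if 0.5 * local + 0.5 * market_share > threshold[i]:
                adopted[i] = True

print(f"{sum(adopted)} of {N} households adopted after {STEPS} steps")
```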