876 results for Large scale evaluation
Abstract:
Two cost-efficient genome-scale methodologies for assessing DNA methylation are MethylCap-seq and Illumina's Infinium HumanMethylation450 BeadChips (HM450). Objective information regarding the best-suited methodology for a specific research question is scant. Therefore, we performed a large-scale evaluation on a set of 70 brain tissue samples, i.e. 65 glioblastoma and 5 non-tumoral tissues. As MethylCap-seq coverages were limited, we focused on the inherent capacity of the methodology to detect methylated loci rather than on a quantitative analysis. MethylCap-seq and HM450 data were dichotomized and performances were compared using a gold-standard-free Bayesian modelling procedure. While conditional specificity was adequate for both approaches, conditional sensitivity was systematically higher for HM450. In addition, genome-wide characteristics were compared, revealing that HM450 probes identified substantially fewer regions than MethylCap-seq. Although the results indicated that the latter method can detect more potentially relevant DNA methylation, this did not translate into the discovery of more differentially methylated loci between tumours and controls compared with HM450. Our results therefore indicate that the two methodologies are complementary, with a higher sensitivity for HM450 and a far larger genome-wide coverage for MethylCap-seq, but also that a more comprehensive character does not automatically imply more significant results in biomarker studies.
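As a first-pass illustration of what comparing two dichotomized platforms involves (a plain agreement check on simulated calls, not the gold-standard-free Bayesian procedure the study actually used; all numbers are illustrative):

```python
import numpy as np

# Simulated dichotomized methylation calls (True = methylated) for the same
# loci on both platforms; real inputs would be thresholded MethylCap-seq
# counts and HM450 beta values.
rng = np.random.default_rng(0)
truth = rng.random(10_000) < 0.3                                  # latent status
hm450 = np.where(truth, rng.random(10_000) < 0.95, rng.random(10_000) < 0.05)
methylcap = np.where(truth, rng.random(10_000) < 0.80, rng.random(10_000) < 0.05)

# Cross-tabulate the two platforms' calls.
both = np.sum(hm450 & methylcap)
only_hm450 = np.sum(hm450 & ~methylcap)
only_cap = np.sum(~hm450 & methylcap)
neither = np.sum(~hm450 & ~methylcap)
n = hm450.size

# Raw agreement and Cohen's kappa (agreement corrected for chance).
p_o = (both + neither) / n
p_e = ((both + only_hm450) * (both + only_cap)
       + (only_cap + neither) * (only_hm450 + neither)) / n**2
print(f"agreement = {p_o:.3f}, kappa = {(p_o - p_e) / (1 - p_e):.3f}")
```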
Abstract:
Marine yeasts have been regarded as safe and as having a beneficial impact on biotechnological processes. They provide good nutritional and dietary value, indicating their potential application as feed supplements in aquaculture. Brown et al. (1996) evaluated marine yeasts and characterised them as having high protein and carbohydrate content, good amino acid composition and high levels of saturated fats. However, there is a paucity of information on marine yeasts as feed supplements, and no feed formulation supplemented with them has been found either in the literature or on the market. This is supported by Zhenming et al. (2006), who reported that there is still a lack of feed composed of single cell protein (SCP) from marine yeasts with a high content of protein and other nutrients. Recent research has shown that marine yeasts also have high potential for use in the food, feed, medical and biofuel industries, as well as in marine biotechnology (Chi et al., 2009; 2010). Sajeevan et al. (2006; 2009a) and Sarlin and Philip (2011) demonstrated that the marine yeast Candida sake serves as a high-quality, inexpensive nutrient source with proven immunostimulatory properties for cultured shrimps. This strain has been made part of the culture collection of the National Centre for Aquatic Animal Health, Cochin University of Science and Technology, as Candida MCCF 101. Over the years, marine yeasts have been gaining increased attention in the animal feed industry due to their nutritional value and immune-boosting properties. Therefore, the present study was undertaken, focusing on nutritional quality, optimization of large-scale production and evaluation of the protective effect of this yeast on Koi carp against Aeromonas infection.
Abstract:
We compare the characteristics of synthetic European droughts generated by the HiGEM[1] coupled climate model run with present-day atmospheric composition with observed drought events extracted from the CRU TS3 data set. The results demonstrate consistency in both the rate of drought occurrence and the spatiotemporal structure of the events. Estimates of the probability density functions for event area, duration and severity are shown to be similar with confidence > 90%. Encouragingly, HiGEM is shown to replicate the extreme tails of the observed distributions and thus the most damaging European drought events. The soil moisture state is shown to play an important role in drought development. Once a large-scale drought has been initiated, it is found to be 50% more likely to continue if the local soil moisture is below the 40th percentile. In response to increased concentrations of atmospheric CO2, the modelled droughts are found to increase in duration, area and severity. The drought response can be largely attributed to temperature-driven changes in relative humidity.
[1] HiGEM is based on the latest climate configuration of the Met Office Hadley Centre Unified Model (HadGEM1), with the horizontal resolution increased to 1.25 x 0.83 degrees in longitude and latitude in the atmosphere and 1/3 x 1/3 degrees in the ocean.
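A minimal sketch of how such a conditional persistence probability can be estimated (on toy data; the variable names and the persistence mechanism are illustrative assumptions, not HiGEM output):

```python
import numpy as np

# Toy records: for each month inside a drought, the local soil-moisture
# percentile and whether the drought persisted into the next month.
rng = np.random.default_rng(1)
soil_pct = rng.uniform(0, 100, 5_000)
# Illustrative mechanism: drier soil makes continuation more likely.
persists = rng.random(5_000) < np.where(soil_pct < 40, 0.6, 0.4)

p_dry = persists[soil_pct < 40].mean()    # P(continue | soil < 40th pct)
p_wet = persists[soil_pct >= 40].mean()   # P(continue | soil >= 40th pct)
print(f"dry: {p_dry:.2f}, wet: {p_wet:.2f}, "
      f"relative increase: {(p_dry / p_wet - 1) * 100:.0f}%")
```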
Abstract:
Random Forests™ is reported to be one of the most accurate classification algorithms for complex data analysis. It shows excellent performance even when most predictors are noisy and the number of variables is much larger than the number of observations. In this thesis, Random Forests was applied to a large-scale lung cancer case-control study, and a novel way of automatically selecting prognostic factors was proposed. A synthetic positive control was also used to validate the Random Forests method. Throughout this study we showed that Random Forests can deal with a large number of weak input variables without overfitting, and that it can account for non-additive interactions between these input variables. Random Forests can also be used for variable selection without being adversely affected by collinearities. Random Forests can handle large-scale data sets without rigorous data preprocessing, and it has a robust variable importance ranking measure. We propose a novel variable selection method, in the context of Random Forests, that uses the data noise level as the cut-off value to determine the subset of important predictors. This new approach enhances the ability of the Random Forests algorithm to automatically identify important predictors in complex data. The cut-off value can also be adjusted based on the results of the synthetic positive control experiments. When the data set had a high variables-to-observations ratio, Random Forests complemented the established logistic regression. This study suggests that Random Forests is recommended for such high-dimensional data: one can use Random Forests to select the important variables and then use logistic regression, or Random Forests itself, to estimate the effect sizes of the predictors and to classify new observations. We also found that the mean decrease in accuracy is a more reliable variable ranking measure than the mean decrease in Gini.
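The noise-level cut-off idea generalises well. A minimal sketch, using scikit-learn's permutation importance as the mean decrease in accuracy and a permuted "shadow" column as the synthetic control (the data and thresholding rule are illustrative, not the thesis's exact procedure):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
n = 500
X = rng.normal(size=(n, 20))                        # 20 candidate predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

# Synthetic control: a permuted copy of a real column keeps the marginal
# distribution but destroys any relation to y, so its importance estimates
# the data noise level.
X_aug = np.column_stack([X, rng.permutation(X[:, 0])])

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_aug, y)
imp = permutation_importance(rf, X_aug, y, n_repeats=20, random_state=0)

# Keep predictors whose mean decrease in accuracy exceeds the control's.
cutoff = imp.importances_mean[-1]
print("selected predictors:", np.where(imp.importances_mean[:-1] > cutoff)[0])
```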
Abstract:
A review of the literature on appointment-keeping served as the basis for the development of an organizational paradigm for the study of appointment-keeping in the Beta-Blocker Heart Attack Trial (BHAT). Features of the organizational environment, demographic characteristics of BHAT enrollees, organizational structure and processes, and previous organizational performance were measured so as to provide exploratory information on the appointment-keeping behavior of 3,837 participants enrolled at thirty-two Clinical Centers. The results suggest that the social context of individual behavior is an important consideration for understanding patient compliance. In particular, previous organizational performance (as measured by the attainment of recruitment goals) and the ability to utilize resources had particularly strong bivariate associations with appointment-keeping. Implications for future theory development, research and practice are discussed, as is a suggestion for the development of multidisciplinary research efforts conducted within Centers for the study and application of adherence behaviors.
Abstract:
This paper describes the main goals and outcomes of the EU-funded Framework 7 project entitled Semantic Evaluation at Large Scale (SEALS). The growth and success of the Semantic Web is built upon a wide range of semantic technologies, from ontology engineering tools through to semantic web service discovery and semantic search. The evaluation of such technologies, and indeed assessments of their mutual compatibility, is critical for their sustained improvement and adoption. The SEALS project is creating an open and sustainable platform on which all aspects of an evaluation can be hosted and executed, designed to accommodate most technology types. It is envisaged that the platform will become the de facto repository of test datasets and will allow anyone to organise, execute and store the results of technology evaluations free of charge and without corporate bias. The demonstration will show how individual tools can be prepared for evaluation, uploaded to the platform, evaluated according to given criteria, and the subsequent results viewed. In addition, the demonstration will show the flexibility and power of the SEALS Platform for evaluation organisers by highlighting some of the key technologies used.
Abstract:
Objectives: To conduct an independent evaluation of the first phase of the Health Foundation's Safer Patients Initiative (SPI), and to identify the net additional effect of SPI and any differences in changes between participating and non-participating NHS hospitals. Design: Mixed method evaluation involving five substudies, before and after design. Setting: NHS hospitals in the United Kingdom. Participants: Four hospitals (one in each country in the UK) participating in the first phase of the SPI (SPI1); 18 control hospitals. Intervention: SPI1 was a compound (multicomponent) organisational intervention delivered over 18 months that focused on improving the reliability of specific frontline care processes in designated clinical specialties and on promoting organisational and cultural change. Results: Senior staff members were knowledgeable and enthusiastic about SPI1. There was a small (0.08 points on a 5 point scale) but significant (P<0.01) effect in favour of the SPI1 hospitals in one of 11 dimensions of the staff questionnaire (organisational climate). Qualitative evidence showed only modest penetration of SPI1 at medical ward level. Although SPI1 was designed to engage staff from the bottom up, it did not usually feel like this to those working on the wards, and questions about the legitimacy of some aspects of SPI1 were raised. Of the five components used to identify patients at risk of deterioration - monitoring of vital signs (14 items); routine tests (three items); evidence based standards specific to certain diseases (three items); prescribing errors (multiple items from the British National Formulary); and medical history taking (11 items) - there was little net difference between control and SPI1 hospitals, except in relation to the quality of monitoring of acute medical patients, which improved on average over time across all hospitals. Recording of respiratory rate increased to a greater degree in SPI1 than in control hospitals; in the second six hours after admission, recording increased from 40% (93) to 69% (165) in control hospitals and from 37% (141) to 78% (296) in SPI1 hospitals (odds ratio for "difference in difference" 2.1, 99% confidence interval 1.0 to 4.3; P=0.008). Use of a formal scoring system for patients with pneumonia also increased over time (from 2% (102) to 23% (111) in control hospitals and from 2% (170) to 9% (189) in SPI1 hospitals), which favoured controls and was not significant (0.3, 0.02 to 3.4; P=0.173). There were no improvements in the proportion of prescription errors and no effects that could be attributed to SPI1 in non-targeted generic areas (such as enhanced safety culture). On some measures, the lack of effect could be because compliance was already high at baseline (such as use of steroids in over 85% of cases where indicated), but even where there was more room for improvement (such as in the quality of medical history taking), there was no significant additional net effect of SPI1. There were no changes over time, or between control and SPI1 hospitals, in errors or rates of adverse events in patients on medical wards. Mortality increased from 11% (27) to 16% (39) among controls and decreased from 17% (63) to 13% (49) among SPI1 hospitals, but the risk adjusted difference was not significant (0.5, 0.2 to 1.4; P=0.085). Poor care was a contributing factor in four of the 178 deaths identified by review of case notes. The survey of patients showed no significant differences apart from an increase in the perception of cleanliness in favour of SPI1 hospitals.
Conclusions: The introduction of SPI1 was associated with improvements in one of the types of clinical process studied (monitoring of vital signs) and in one measure of staff perceptions of organisational climate. There was no additional effect of SPI1 on other targeted issues, nor on other measures of generic organisational strengthening.
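For readers wanting to retrace the headline respiratory-rate figure, here is a back-of-envelope check of the unadjusted difference-in-difference odds ratio from the percentages quoted above (the paper's 2.1 is the risk-adjusted estimate, so the raw figure comes out somewhat lower):

```python
# Recording of respiratory rate in the second six hours after admission:
# controls rose from 40% to 69%, SPI1 hospitals from 37% to 78%.
def odds(p):
    return p / (1 - p)

control_change = odds(0.69) / odds(0.40)
spi1_change = odds(0.78) / odds(0.37)
print(f"unadjusted DiD odds ratio: {spi1_change / control_change:.2f}")  # ~1.8
```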
Abstract:
Data envelopment analysis (DEA) is one of the most widely used methods for measuring the efficiency and productivity of decision-making units (DMUs). For large-scale data sets, especially those containing negative measures, DEA inevitably demands substantial computer resources in terms of memory and CPU time. In recent years, a wide range of studies have been conducted on combined artificial neural network and DEA methods. In this study, a supervised feed-forward neural network is proposed to evaluate the efficiency and productivity of large-scale data sets with negative values, in contrast to the corresponding DEA method. Results indicate that the proposed network has some computational advantages over the corresponding DEA models; it can therefore be considered a useful tool for measuring the efficiency of DMUs with (large-scale) negative data.
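A minimal sketch of the general DEA/neural-network combination, assuming efficiency scores have already been computed by a standard DEA solver for a training subsample (the architecture, sizes and data are placeholders, not the paper's design):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X_train = rng.normal(size=(200, 6))      # DMU input/output measures (may be negative)
dea_scores = rng.uniform(0.2, 1.0, 200)  # placeholder for solver-computed DEA scores

# A supervised feed-forward network learns the measures -> efficiency mapping,
# avoiding one LP solve per DMU on the full data set.
net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                   random_state=0).fit(X_train, dea_scores)

X_new = rng.normal(size=(5, 6))          # unseen DMUs, negative values allowed
print(net.predict(X_new))
```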
Abstract:
The seminal multiple-view stereo benchmark evaluations from Middlebury and by Strecha et al. have played a major role in propelling the development of multi-view stereopsis methodology. Although seminal, these benchmark datasets are limited in scope, with few reference scenes. Here, we take these works a step further by proposing a new multi-view stereo dataset that is an order of magnitude larger in number of scenes and significantly more diverse. Specifically, we propose a dataset containing 80 scenes of large variability. Each scene consists of 49 or 64 accurate camera positions and reference structured-light scans, all acquired by a 6-axis industrial robot. To apply this dataset, we propose an extension of the evaluation protocol from the Middlebury evaluation, reflecting the more complex geometry of some of our scenes. The proposed dataset is used to evaluate the state-of-the-art multi-view stereo algorithms of Tola et al., Campbell et al. and Furukawa et al. We thereby demonstrate the usability of the dataset and gain insight into the workings and challenges of multi-view stereopsis. Through these experiments we empirically validate some of the central hypotheses of multi-view stereopsis, and determine and reaffirm some of its central challenges.
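A minimal sketch of the kind of point-cloud metrics on which such evaluation protocols build (nearest-neighbour accuracy/completeness in the Middlebury tradition; the outlier handling and observability masks of a full protocol are omitted, and the data are toy stand-ins):

```python
import numpy as np
from scipy.spatial import cKDTree

def accuracy_completeness(recon, reference):
    """Accuracy: distance from each reconstructed point to the reference
    scan. Completeness: distance from each reference point to the
    reconstruction. Medians reported; real protocols add outlier handling."""
    acc = cKDTree(reference).query(recon)[0]
    comp = cKDTree(recon).query(reference)[0]
    return np.median(acc), np.median(comp)

# Toy stand-ins for a structured-light reference scan and an MVS result.
rng = np.random.default_rng(4)
reference = rng.uniform(0, 1, size=(10_000, 3))
recon = reference[:8_000] + rng.normal(scale=0.002, size=(8_000, 3))
print(accuracy_completeness(recon, reference))
```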
Abstract:
In spite of the significant amount of scientific work on Wireless Sensor Networks (WSNs), there is a clear lack of effective, feasible and usable WSN system architectures that address both functional and non-functional requirements in an integrated fashion. This poster abstract outlines the EMMON system architecture for large-scale, dense, real-time embedded monitoring. EMMON relies on a hierarchical network architecture together with integrated middleware and command and control mechanisms. It has been designed to use standard commercially available technologies, while maintaining as much flexibility as possible to meet specific applications' requirements. The EMMON WSN architecture has been validated through extensive simulation and experimental evaluation, including a 300+ node test-bed, the largest WSN test-bed in Europe to date.
Abstract:
Most research work on WSNs has focused on protocols or on specific applications. There is a clear lack of easy/ready-to-use WSN technologies and tools for planning, implementing, testing and commissioning WSN systems in an integrated fashion. While there exists a plethora of papers about network planning and deployment methodologies, to the best of our knowledge none of them helps the designer to match coverage requirements with network performance evaluation. In this paper we aim to fill this gap by presenting a unified toolset, i.e., a framework able to provide a global picture of the system, from network deployment planning to system test and validation. This toolset has been designed to support the EMMON WSN system architecture for large-scale, dense, real-time embedded monitoring. It includes tools for network deployment planning, worst-case analysis and dimensioning, protocol simulation, and automatic remote programming and hardware testing. This toolset was instrumental in validating the system architecture through DEMMON1, the first EMMON demonstrator, i.e., a 300+ node test-bed, which is, to the best of our knowledge, the largest single-site WSN test-bed in Europe to date.
Abstract:
Wireless sensor networks (WSNs) have attracted growing interest in the last decade as an infrastructure to support a diversity of ubiquitous computing and cyber-physical systems. However, most research work has focused on protocols or on specific applications. As a result, there remains a clear lack of effective, feasible and usable system architectures that address both functional and non-functional requirements in an integrated fashion. In this paper, we outline the EMMON system architecture for large-scale, dense, real-time embedded monitoring. EMMON provides a hierarchical communication architecture together with integrated middleware and command and control software. It has been designed to use standard commercially available technologies, while maintaining as much flexibility as possible to meet specific applications' requirements. The EMMON architecture has been validated through extensive simulation and experimental evaluation, including a 300+ node test-bed, which is, to the best of our knowledge, the largest single-site WSN test-bed in Europe to date.
Abstract:
High-throughput technologies are now used to generate more than one type of data from the same biological samples. To properly integrate such data, we propose using co-modules, which describe coherent patterns across paired data sets, and we conceive several modular methods for their identification. We first test these methods using in silico data, demonstrating that the integrative scheme of our Ping-Pong Algorithm uncovers drug-gene associations more accurately when considering noisy or complex data. Second, we provide an extensive comparative study using the gene-expression and drug-response data from the NCI-60 cell lines. Using information from the DrugBank and Connectivity Map databases, we show that the Ping-Pong Algorithm predicts drug-gene associations significantly better than other methods. Co-modules provide insights into possible mechanisms of action for a wide range of drugs and suggest new targets for therapy.
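A heavily simplified sketch of the alternating idea behind such co-module searches: scores are bounced between two matrices that share the cell-line dimension until they stabilise. The published Ping-Pong Algorithm adds thresholding and normalisation steps omitted here, and all data below are random placeholders:

```python
import numpy as np

def ping_pong(E, D, gene_seed, n_iter=50):
    """Bounce a gene signature through expression (genes x cell lines) and
    drug-response (drugs x cell lines) matrices; a simplified power-style
    iteration, not the full published algorithm."""
    g = gene_seed.astype(float)
    for _ in range(n_iter):
        s = E.T @ g                 # cell-line scores from the gene signature
        d = D @ s                   # drug scores from the cell-line scores
        g = E @ (D.T @ d)           # back through cell lines to genes
        g /= np.linalg.norm(g)
    return g, d

rng = np.random.default_rng(5)
E = rng.normal(size=(300, 60))      # expression: genes x cell lines
D = rng.normal(size=(40, 60))       # response: drugs x cell lines
seed = np.zeros(300); seed[:5] = 1.0
gene_scores, drug_scores = ping_pong(E, D, seed)
```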
Abstract:
Despite the high prevalence of colon cancer worldwide and the great interest in targeted anti-cancer therapy, only a few tumor-specific gene products have been identified that could serve as targets for the immunological treatment of colorectal cancers. The aim of our study was therefore to identify frequently expressed colon cancer-specific antigens. We performed a large-scale analysis of genes expressed in normal colon and colon cancer tissues isolated from colorectal cancer patients using massively parallel signature sequencing (MPSS). Candidates were additionally subjected to experimental evaluation by semi-quantitative RT-PCR on a cohort of colorectal cancer patients. From a pool of more than 6000 genes identified unambiguously in the analysis, we found 2124 genes that were selectively expressed in colon cancer tissue and 147 genes that were differentially expressed to a significant degree between normal and cancer cells. Differential expression of many genes was confirmed by RT-PCR on a cohort of patients. Although the deregulated genes were involved in many different cellular pathways, we found that genes expressed in the extracellular space were significantly over-represented in colorectal cancer. Strikingly, we identified a transcript from a chromosome X-linked member of the human endogenous retrovirus (HERV) H family that was frequently and selectively expressed in colon cancer but not in normal tissues. Our data suggest that this sequence should be considered as a target for immunological interventions against colorectal cancer.
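Over-representation claims of this kind are typically backed by a hypergeometric (Fisher-type) enrichment test; a minimal sketch with placeholder counts (only the 6000-gene pool and the 147 differentially expressed genes come from the abstract; the annotation counts are invented for illustration):

```python
from scipy.stats import hypergeom

N_pool = 6000   # genes identified unambiguously (from the abstract)
n_de = 147      # significantly differentially expressed genes (from the abstract)
K_extra = 600   # placeholder: genes annotated 'extracellular space'
k_hit = 35      # placeholder: differentially expressed AND extracellular

# P(X >= k_hit) if the n_de genes were drawn at random from the pool.
p = hypergeom.sf(k_hit - 1, N_pool, K_extra, n_de)
print(f"enrichment p-value: {p:.3g}")
```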