943 results for Software analysis
Abstract:
Although correspondence analysis is now widely available in statistical software packages and applied in a variety of contexts, notably the social and environmental sciences, there are still some misconceptions about this method as well as unresolved issues which remain controversial to this day. In this paper we hope to settle these matters, namely (i) the way CA measures variance in a two-way table and how to compare variances between tables of different sizes, (ii) the influence, or rather lack of influence, of outliers in the usual CA maps, (iii) the scaling issue and the biplot interpretation of maps, (iv) whether or not to rotate a solution, and (v) statistical significance of results.
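Point (i) concerns how CA measures variance: the total variance (inertia) of a two-way table is the table's chi-square statistic divided by its grand total, which is why comparing variances across tables of different sizes needs care. A minimal sketch (the function name is mine, not from the paper):

```python
def total_inertia(table):
    """Total inertia of a two-way contingency table: chi-square / grand total."""
    n = sum(sum(row) for row in table)
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_sums[i] * col_sums[j] / n  # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    return chi2 / n
```

A table whose rows and columns are independent has inertia 0; a perfectly associated 2x2 table such as `[[20, 0], [0, 20]]` has inertia 1.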
Abstract:
Accurate determination of subpopulation sizes in bimodal populations remains problematic, yet it represents a powerful way to compare cellular heterogeneity under different environmental conditions. So far, most studies have relied on qualitative descriptions of population distribution patterns, on population-independent descriptors, or on arbitrary placement of thresholds distinguishing biological ON from OFF states. We found that all these methods fall short of accurately describing small subpopulation sizes in bimodal populations. Here we propose a simple, statistics-based method for the analysis of small subpopulation sizes for use in the free software environment R and test this method on real as well as simulated data. Four so-called population-splitting methods were designed with different algorithms that can estimate subpopulation sizes from bimodal populations. All four methods proved more precise than previously used methods when analyzing subpopulation sizes of transfer-competent cells arising in populations of the bacterium Pseudomonas knackmussii B13. The methods' resolving powers were further explored by bootstrapping and simulations. Two of the methods were not severely limited by the proportions of subpopulations they could estimate correctly, but the two others only allowed accurate subpopulation quantification when this amounted to less than 25% of the total population. In contrast, only one method was still sufficiently accurate with subpopulations smaller than 1% of the total population. This study proposes a number of rational approximations to quantifying small subpopulations and offers an easy-to-use protocol for their implementation in the open-source statistical software environment R.
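The abstract does not detail the four population-splitting algorithms, but the general idea of replacing an arbitrary ON/OFF threshold with a statistics-based one can be illustrated with an Otsu-style criterion: pick the cut that minimizes the weighted within-group variance, then report the upper-group fraction. This is my own sketch, not necessarily one of the paper's methods:

```python
def split_bimodal(values):
    """Estimate the ON-subpopulation fraction of a bimodal sample by choosing
    the threshold that minimizes the summed within-group variance (Otsu-style),
    instead of placing the cut arbitrarily."""
    xs = sorted(values)

    def within_var(t):
        lo = [x for x in xs if x <= t]
        hi = [x for x in xs if x > t]
        if not lo or not hi:
            return float("inf")
        def var(g):
            m = sum(g) / len(g)
            return sum((x - m) ** 2 for x in g) / len(g)
        return len(lo) * var(lo) + len(hi) * var(hi)  # weighted within-group variance

    t = min(xs[1:-1], key=within_var)          # candidate cuts: interior points
    on_fraction = sum(x > t for x in xs) / len(xs)
    return t, on_fraction
```

On a toy sample with two clear modes, e.g. `[0, 0.1, 0.2, 10, 10.1]`, the cut falls between the modes and the ON fraction is 0.4.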
Abstract:
This user manual for the data visualization software "Ocean Data View" (ODV) describes the exploration, analysis, and visualization of oceanographic data in the format of the global ocean database collection "World Ocean Database" (WOD). The manual comprises six practical exercises describing, step by step, the creation of metavariables, the import of data, and their visualization through latitude/longitude maps, scatter plots, vertical sections, and time series. Extensive use of ODV for the visualization of oceanographic data is recommended for the scientific staff of IMARPE.
Abstract:
BACKGROUND: Finding genes that are differentially expressed between conditions is an integral part of understanding the molecular basis of phenotypic variation. In the past decades, DNA microarrays have been used extensively to quantify the abundance of mRNA corresponding to different genes, and more recently high-throughput sequencing of cDNA (RNA-seq) has emerged as a powerful competitor. As the cost of sequencing decreases, it is conceivable that the use of RNA-seq for differential expression analysis will increase rapidly. To exploit the possibilities and address the challenges posed by this relatively new type of data, a number of software packages have been developed especially for differential expression analysis of RNA-seq data. RESULTS: We conducted an extensive comparison of eleven methods for differential expression analysis of RNA-seq data. All methods are freely available within the R framework and take as input a matrix of counts, i.e. the number of reads mapping to each genomic feature of interest in each of a number of samples. We evaluate the methods based on both simulated data and real RNA-seq data. CONCLUSIONS: Very small sample sizes, which are still common in RNA-seq experiments, pose problems for all evaluated methods, and any results obtained under such conditions should be interpreted with caution. For larger sample sizes, the methods combining a variance-stabilizing transformation with the 'limma' method for differential expression analysis perform well under many different conditions, as does the nonparametric SAMseq method.
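All eleven methods start from the same input: a genes-by-samples count matrix. As a deliberately simplified illustration of what these tools compute per gene (not one of the evaluated methods, which add variance modelling on top), here is a log2 fold change between two sample groups with a pseudocount to stabilize zeros:

```python
import math

def log_fold_change(counts_a, counts_b, pseudo=0.5):
    """Naive per-gene log2 fold change between two groups of count samples.
    The pseudocount guards against zeros; real tools (limma+voom, SAMseq)
    additionally model the mean-variance relationship of counts."""
    mean_a = sum(counts_a) / len(counts_a)
    mean_b = sum(counts_b) / len(counts_b)
    return math.log2((mean_b + pseudo) / (mean_a + pseudo))
```

For a gene with mean counts 10 in one condition and 40 in the other, the log2 fold change (without pseudocount) is exactly 2.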
Abstract:
Introduction. This paper studies the situation of research on Catalan literature between 1976 and 2003 by carrying out a bibliometric and social network analysis of PhD theses defended in Spain. It has a dual aim: to present interesting results for the discipline and to demonstrate the methodological efficacy of scientometric tools in the humanities, a field in which they are often neglected due to the difficulty of gathering data. Method. The analysis was performed on 151 records obtained from the TESEO database of PhD theses. The quantitative estimates include the use of the UCINET and Pajek software packages. Authority control was performed on the records. Analysis. Descriptive statistics were used to describe the sample and the distribution of responses to each question. Sex differences on key questions were analysed using the Chi-squared test. Results. The value of the figures obtained is demonstrated. The information obtained on the topic and the periods studied in the theses, and on the actors involved (doctoral students, thesis supervisors and members of defence committees), provides important insights into the mechanisms of humanities disciplines. The main research tendencies of Catalan literature are identified. It is observed that the composition of members of the thesis defence committees follows Lotka's Law. Conclusions. Bibliometric analysis and social network analysis may be especially useful in the humanities and in other fields which are lacking in scientometric data in comparison with the experimental sciences.
Molecular analysis of the bacterial diversity in a specialized consortium for diesel oil degradation
Abstract:
Diesel oil is a compound derived from petroleum, consisting primarily of hydrocarbons. Poor conditions in the transportation and storage of this product can contribute significantly to accidental spills, causing serious ecological problems in soil and water and affecting the diversity of the microbial environment. Cloning and sequencing of the 16S rRNA gene is one of the molecular techniques that allows estimation and comparison of the microbial diversity in different environmental samples. The aim of this work was to estimate the diversity of microorganisms from the Bacteria domain in a consortium specialized in diesel oil degradation through partial sequencing of the 16S rRNA gene. After the extraction of metagenomic DNA, the material was amplified by PCR using oligonucleotide primers specific for the 16S rRNA gene. The PCR products were cloned into the pGEM-T Easy vector (Promega), and Escherichia coli was used as the host cell for the recombinant DNAs. Partial clone sequences were obtained using universal oligonucleotide primers from the vector. The genetic library obtained generated 431 clones. All the sequenced clones showed similarity to the phylum Proteobacteria, with Gammaproteobacteria the most represented group (49.8% of the clones), followed by Alphaproteobacteria (44.8%) and Betaproteobacteria (5.4%). The Pseudomonas genus was the most abundant in the metagenomic library, followed by the Parvibaculum and Sphingobium genera, respectively. After partial sequencing of the 16S rRNA, the diversity of the bacterial consortium was estimated using the DOTUR software. When comparing these sequences to the database of the National Center for Biotechnology Information (NCBI), a strong correlation was found between the data generated by the software used and the data deposited in NCBI.
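Tools like DOTUR estimate diversity by grouping 16S sequences into operational taxonomic units (OTUs) at a chosen similarity cutoff. DOTUR itself works from a full distance matrix with several clustering criteria; purely as an illustration of the idea, here is a greedy sketch for equal-length aligned sequences (all names and the toy cutoff are mine):

```python
def cluster_otus(seqs, identity=0.97):
    """Greedy OTU clustering sketch: each sequence joins the first existing
    OTU whose founding representative it matches at >= `identity`,
    otherwise it founds a new OTU."""
    def ident(a, b):
        # fraction of matching positions between two aligned sequences
        return sum(x == y for x, y in zip(a, b)) / len(a)

    reps, otus = [], []
    for s in seqs:
        for k, r in enumerate(reps):
            if ident(s, r) >= identity:
                otus[k].append(s)
                break
        else:
            reps.append(s)
            otus.append([s])
    return otus
```

With a 75% cutoff on short toy sequences, `["AAAA", "AAAT", "TTTT"]` collapses into two OTUs; the number of OTUs at a given cutoff is the usual input to richness estimators.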
Abstract:
The quadrennial need study was developed to assist in identifying county highway financial needs (construction, rehabilitation, maintenance, and administration) and in the distribution of the road use tax fund (RUTF) among the counties in the state. During the period since the need study was first conducted using HWYNEEDS software, between 1982 and 1998, there have been large fluctuations in the level of funds distributed to individual counties. A recent study performed by Jim Cable (HR-363, 1993) found that one of the major factors affecting the volatility in the level of fluctuations is the quality of the pavement condition data collected and the accuracy of these data. In 1998, Center for Transportation Research and Education researchers (Maze and Smadi) completed a project to study the feasibility of using automated pavement condition data, collected for the Iowa Pavement Management Program (IPMP) for the paved county roads, in the HWYNEEDS software (TR-418). The automated condition data are objective and also more current, since they are collected in a two-year cycle compared to the 10-year cycle currently used by HWYNEEDS. The study proved that the use of the automated condition data in HWYNEEDS would be feasible and beneficial in reducing fluctuations when applied to a pilot study area. In another recommendation from TR-418, the researchers recommended a full analysis and investigation of the HWYNEEDS methodology and parameters (for more information, please review the TR-418 project report). The study reported in this document builds on the previous study on using the automated condition data in HWYNEEDS and covers the analysis and investigation of the HWYNEEDS computer program methodology and parameters.
The underlying hypothesis for this study is that, along with the IPMP automated condition data, some changes need to be made to HWYNEEDS parameters to accommodate the use of the new data, which will stabilize the process of allocating resources and reduce fluctuations from one quadrennial need study to another. Another objective of this research is to investigate gravel road needs and study the feasibility of developing a more objective approach to determining needs on the counties' gravel road networks. This study identifies new procedures by which the HWYNEEDS computer program is used to conduct the quadrennial need study on paved roads. Also, a new procedure is developed to determine gravel road needs outside of the HWYNEEDS program. Recommendations are made for the new procedures and also in terms of making changes to the current quadrennial need study. Future research areas are also identified.
Abstract:
OBJECTIVE: To evaluate an automated seizure detection (ASD) algorithm in EEGs with periodic and other challenging patterns. METHODS: Selected EEGs recorded in patients over 1 year old were classified into four groups: A. Periodic lateralized epileptiform discharges (PLEDs) with intermixed electrical seizures. B. PLEDs without seizures. C. Electrical seizures and no PLEDs. D. No PLEDs or seizures. Recordings were analyzed with the Persyst P12 software and compared to the raw EEG interpreted by two experienced neurophysiologists; positive percent agreement (PPA) and false-positive rates per hour (FPR) were calculated. RESULTS: We assessed 98 recordings (Group A=21 patients; B=29; C=17; D=31). Total duration was 82.7 h (median: 1 h), containing 268 seizures. The software detected 204 (76.1%) seizures; all ictal events were captured in 29/38 (76.3%) patients; in only 3 (7.7%) were no seizures detected. Median PPA was 100% (range 0-100; interquartile range 50-100), and the median FPR was 0/h (range 0-75.8; interquartile range 0-4.5); however, lower performance was seen in the groups containing periodic discharges. CONCLUSION: This analysis provides data on the yield of ASD in a particularly difficult subset of EEG recordings, showing that periodic discharges may bias the results. SIGNIFICANCE: Ongoing refinements of this technique might enhance its utility and lead to more extensive application.
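The two performance measures used above reduce to simple ratios; a minimal sketch following their common definitions (parameter names are mine):

```python
def positive_percent_agreement(detected, total_events):
    """PPA: share of expert-marked seizures the detector also flagged, in %."""
    return 100.0 * detected / total_events

def false_positive_rate(false_alarms, hours):
    """FPR: detector alarms with no corresponding seizure, per hour of EEG."""
    return false_alarms / hours
```

With the numbers reported here, 204 detected out of 268 seizures gives a PPA of about 76.1%.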
Abstract:
Introduction: Therapeutic drug monitoring (TDM) aims to optimize treatment by individualizing the dosage regimen based on measured blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical expertise. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. Over the last decades, computer programs have been developed to assist clinicians in this task. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Method: A literature and Internet search was performed to identify software. All programs were tested on a common personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: Twelve software tools were identified, tested, and ranked, representing a comprehensive review of the available software's characteristics. The number of drugs handled varies widely, and 8 programs offer the user the ability to add their own drug models. Ten programs are able to compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while 9 are also able to suggest an a priori dosage regimen (prior to any blood concentration measurement) based on individual patient covariates such as age, gender, and weight. Among those applying Bayesian analysis, one uses a non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. Other programs evaluated also have good potential but are less sophisticated (e.g. in terms of storage or report generation) or less user-friendly. Conclusion: Whereas two integrated programs are at the top of the ranked list, such complex tools might not fit all institutions, and each software tool must be considered with respect to the individual needs of hospitals or clinicians. Interest in computing tools to support therapeutic monitoring is still growing. Although developers have put effort into them over the last years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capacity of data storage, and report generation.
Abstract:
Objectives: Therapeutic drug monitoring (TDM) aims to optimize treatment by individualizing the dosage regimen based on measured blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical expertise. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Methods: The literature and Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: Twelve software tools were identified, tested, and ranked, representing a comprehensive review of the available software characteristics. The number of drugs handled varies from 2 to more than 180, and integration of different population types is available in some programs. Furthermore, 8 programs offer the ability to add new drug models based on population PK data. Ten computer tools incorporate Bayesian computation to predict the dosage regimen (individual parameters are calculated based on population PK models). All of them are able to compute Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 are also able to suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. Other evaluated programs also have good potential but are less sophisticated or less user-friendly. Conclusions: Whereas two software packages are ranked at the top of the list, such complex tools might not fit all institutions, and each program must be considered with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast to use for routine activities, including for non-experienced users. Although interest in TDM tools is growing and effort has been put into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capacity of data storage, and automated report generation.
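The a posteriori (Bayesian) dosage adaptation that most of these tools implement can be illustrated with a deliberately simplified sketch: a one-compartment constant-infusion model at steady state (Css = rate/CL), a lognormal prior on clearance, lognormal residual error, and a grid search for the maximum a posteriori (MAP) clearance. All names and parameter values below are illustrative assumptions, not taken from any of the benchmarked programs:

```python
import math

def map_clearance(dose_rate, c_obs, cl_pop, omega=0.3, sigma=0.2, grid=None):
    """MAP estimate of clearance CL from one steady-state concentration,
    assuming Css = dose_rate / CL, a lognormal prior on CL with spread
    `omega`, and lognormal residual error with spread `sigma`."""
    if grid is None:
        # candidate CL values from 0.2x to ~4.2x the population value
        grid = [cl_pop * (0.2 + 0.01 * i) for i in range(400)]

    def neg_log_post(cl):
        pred = dose_rate / cl
        prior = (math.log(cl / cl_pop) / omega) ** 2   # pull toward population CL
        lik = (math.log(c_obs / pred) / sigma) ** 2    # pull toward observed level
        return prior + lik

    return min(grid, key=neg_log_post)
```

If the observed concentration matches the population prediction exactly, the MAP estimate stays at the population clearance; a higher-than-predicted concentration pulls the individual clearance below it. The estimated CL then yields the dose rate needed to hit a target concentration (rate = target x CL).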
Abstract:
MicroRNAs (miRs) are involved in the pathogenesis of several neoplasms; however, there are no data on their expression patterns and possible roles in adrenocortical tumors. Our objective was to study adrenocortical tumors by an integrative bioinformatics analysis involving miR and transcriptomics profiling, pathway analysis, and a novel, tissue-specific miR target prediction approach. Thirty-six tissue samples, including normal adrenocortical tissues, benign adenomas, and adrenocortical carcinomas (ACC), were studied by simultaneous miR and mRNA profiling. Novel data-processing software was used to identify all predicted miR-mRNA interactions retrieved from PicTar, TargetScan, and miRBase. Tissue-specific target prediction was achieved by filtering out mRNAs with undetectable expression and searching for mRNA targets with expression alterations inverse to those of their regulatory miRs. Target sets and significant microarray data were subjected to Ingenuity Pathway Analysis. Six miRs with significantly different expression were found. miR-184 and miR-503 showed significantly higher, whereas miR-511 and miR-214 showed significantly lower, expression in ACCs than in the other groups. Expression of miR-210 was significantly lower in cortisol-secreting adenomas than in ACCs. By calculating the difference between dCT(miR-511) and dCT(miR-503) (delta cycle threshold), ACCs could be distinguished from benign adenomas with high sensitivity and specificity. Pathway analysis revealed the possible involvement of G2/M checkpoint damage in ACC pathogenesis. To our knowledge, this is the first report describing miR expression patterns and pathway analysis in sporadic adrenocortical tumors. miR biomarkers may be helpful for the diagnosis of adrenocortical malignancy. This tissue-specific target prediction approach may be used in other tumors as well.
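The diagnostic score above is just the difference of two delta-CT values; a sketch of how such a classifier would be applied (the cutoff here is a hypothetical placeholder, since the abstract reports sensitivity and specificity but not the threshold itself):

```python
def acc_score(dct_mir511, dct_mir503):
    """Difference of delta cycle-threshold values used to separate ACC from
    benign adenoma. Note that a higher CT means lower expression, so the
    sign convention matters when choosing a cutoff."""
    return dct_mir511 - dct_mir503

def classify(score, cutoff=0.0):
    """Label a sample given a cutoff (illustrative value, not from the paper)."""
    return "ACC" if score > cutoff else "benign"
```

For example, a sample with dCT(miR-511)=8.0 and dCT(miR-503)=5.5 scores 2.5 and would be called ACC under this toy cutoff.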
Abstract:
Introduction: Survival of children born prematurely or with very low birth weight has increased dramatically, but the long-term developmental outcome remains unknown. Many children have deficits in cognitive capacities, in particular in executive domains, and those disabilities are likely to involve a central nervous system deficit. To understand their neurostructural origin, we use DTI. Structurally segregated and functionally specialized regions of the cerebral cortex are interconnected by a dense network of axonal pathways. We noninvasively map these pathways across cortical hemispheres and construct normalized structural connection matrices derived from DTI MR tractography. Group comparisons of brain connectivity reveal significant changes in fiber density in children with poor intrauterine growth and extremely premature children (gestational age <28 weeks at birth) compared to control subjects. These changes suggest a link between cortico-axonal pathways and the central nervous system deficit. Methods: Sixty prematurely born infants (5-6 years old) were scanned on a clinical 3T scanner (Magnetom Trio, Siemens Medical Solutions, Erlangen, Germany) at two hospitals (HUG, Geneva and CHUV, Lausanne). For each subject, T1-weighted MPRAGE images (TR/TE=2500/2.91, TI=1100, resolution=1x1x1mm, matrix=256x154) and DTI images (30 directions, TR/TE=10200/107, in-plane resolution=1.8x1.8x2mm, 64 axial slices, matrix=112x112) were acquired. Parents provided written consent after prior ethics board approval. The extraction of the whole-brain structural connectivity matrix was performed following (Cammoun, 2009 and Hagmann, 2008). The MPRAGE images were registered to the non-weighted DTI images using an affine registration, and WM-GM segmentation was performed on them. In order to have equal anatomical localization among subjects, 66 cortical regions with anatomical landmarks were created using the curvature information, i.e.
sulcus and gyrus (Cammoun et al, 2007; Fischl et al, 2004; Desikan et al, 2006), with the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/). Tractography was performed in WM using an algorithm especially designed for DTI/DSI data (Hagmann et al., 2007), and both sets of information were then combined into a matrix. Each row and column of the matrix corresponds to a particular ROI. Each cell of index (i,j) represents the fiber density of the bundle connecting ROIs i and j. Subdividing each cortical region, we obtained 4 connectivity matrices of different resolutions (33, 66, 125 and 250 ROI/hemisphere) for each subject. Subjects were sorted into 3 groups, namely (1) control, (2) Intrauterine Growth Restriction (IUGR), and (3) Extreme Prematurity (EP), depending on their gestational age, weight and percentile-weight score at birth. Group-to-group comparisons were performed between groups (1)-(2) and (1)-(3). The mean ages at examination of the three groups were similar. Results: Quantitative analyses were performed between groups to determine differences in fiber density. For each group, a mean connectivity matrix with 33 ROI/hemisphere resolution was computed. In addition, for all matrix resolutions (33, 66, 125, 250 ROI/hemisphere), the numbers of bundles were computed and averaged. As seen in figure 1, EP and IUGR subjects present an overall reduction of fiber density in both interhemispherical and intrahemispherical connections. This is given quantitatively in table 1. IUGR subjects present a higher percentage of missing fiber bundles than EP subjects when compared to controls (~16% against 11%). When comparing both groups to control subjects, for the EP subjects the occipito-parietal regions appear less interhemispherically connected, whilst the intrahemispherical networks show a lack of fiber density in the limbic system. Children born with IUGR have reductions in interhemispherical connections similar to those of the EP group.
However, the cuneus and precuneus connections with the precentral and paracentral lobes are even lower than in the EP group. For the intrahemispherical connections, the IUGR group presents a loss of fiber density between the deep gray matter structures (striatum) and the frontal and middle frontal poles, connections typically involved in the control of executive functions. For the qualitative analysis, a t-test comparing numbers of bundles (p-value<0.05) gave some preliminary significant results (figure 2). Again, even if both IUGR and EP subjects appear to have significantly fewer connections compared to the control subjects, the IUGR cohort seems to present a greater lack of fiber density, especially involving the cuneus, precuneus and parietal areas. In terms of fiber density, preliminary Wilcoxon tests seem to support the hypothesis suggested by the previous analysis. Conclusions: The goal of this study was to determine the effect of extreme prematurity and poor intrauterine growth on neurostructural development at the age of 6 years. These data indicate that differences in connectivity may well be the basis for the neurostructural and neuropsychological deficits described in these populations in the absence of overt brain lesions (Inder TE, 2005; Borradori-Tolsa, 2004; Dubois, 2008). Indeed, we suggest that IUGR and prematurity lead to alterations of connectivity between brain structures, especially in the occipito-parietal and frontal lobes for EP and the frontal and middle temporal poles for IUGR. Overall, IUGR children show a greater loss of connectivity in the overall connectivity matrix than EP children. In both cases, the localized alteration of connectivity suggests a direct link between cortico-axonal pathways and the central nervous system deficit. Our next step is to link these connectivity alterations to performance in executive function tests.
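The "percentage of missing fiber bundles" comparison can be sketched as counting connections present in the control group's mean connectivity matrix but absent in a patient group's mean matrix (a minimal sketch with my own names, assuming symmetric matrices whose cell (i,j) holds fiber density):

```python
def missing_bundle_pct(control, patient, eps=0.0):
    """Percent of bundles (upper-triangle entries > eps in the control mean
    connectivity matrix) that are absent in the patient mean matrix."""
    n = len(control)
    present = missing = 0
    for i in range(n):
        for j in range(i + 1, n):           # upper triangle: each bundle once
            if control[i][j] > eps:
                present += 1
                if patient[i][j] <= eps:
                    missing += 1
    return 100.0 * missing / present if present else 0.0
```

For a toy 3-ROI case where controls show all three bundles and the patient group retains only one, two of three bundles are missing, i.e. about 66.7%.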
Abstract:
This study aimed to investigate the impact of communication skills training (CST) in oncology on clinicians' linguistic strategies. Verbal communication analysis software (Logiciel d'Analyse de la Communication Verbale) was used to compare simulated patient interviews with oncology clinicians who participated in CST (N = 57) (pre/post with a 6-month interval) and with a control group of oncology clinicians who did not (N = 56) (T1/T2 with a 6-month interval). A significant improvement of linguistic strategies related to biomedical, psychological and social issues was observed. Analysis of linguistic aspects of videotaped interviews might in the future become part of individualised feedback in CST and be utilised as a marker for the evaluation of training.
Abstract:
This paper describes methods to analyze the brain's electric fields recorded with multichannel electroencephalography (EEG) and demonstrates their implementation in the software CARTOOL. It focuses on the analysis of the spatial properties of these fields and on quantitative assessment of changes of field topographies across time, experimental conditions, or populations. Topographic analyses are advantageous because they are reference independent and thus yield statistically unambiguous results. Neurophysiologically, differences in topography directly indicate changes in the configuration of the active neuronal sources in the brain. We describe global measures of field strength and field similarity, temporal segmentation based on topographic variations, topographic analysis in the frequency domain, topographic statistical analysis, and source imaging based on distributed inverse solutions. All analysis methods are implemented in a freely available academic software package called CARTOOL. Besides providing these analysis tools, CARTOOL is particularly designed to visualize the data and the analysis results using 3-dimensional display routines that allow rapid manipulation and animation of 3D images. CARTOOL is therefore a helpful tool for researchers as well as clinicians to interpret multichannel EEG and evoked potentials in a global, comprehensive, and unambiguous way.
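The standard global measure of field strength in this framework is Global Field Power: the spatial standard deviation of the potentials across all electrodes at a single time point. A minimal sketch:

```python
import math

def global_field_power(potentials):
    """Global Field Power at one time point: the standard deviation of the
    scalp potentials across all electrodes. It is reference independent,
    because re-referencing only shifts every channel (and thus the mean)
    by the same constant."""
    n = len(potentials)
    mean = sum(potentials) / n
    return math.sqrt(sum((v - mean) ** 2 for v in potentials) / n)
```

Adding a constant to every channel (i.e. changing the reference) leaves the GFP unchanged, which is what makes topography-based statistics unambiguous with respect to the recording reference.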
Abstract:
Research has shown that one of the major contributing factors in early joint deterioration of portland cement concrete (PCC) pavement is the quality of the coarse aggregate. Conventional physical and freeze/thaw tests are slow and not satisfactory in evaluating aggregate quality. In the last ten years the Iowa DOT has been evaluating X-ray analysis and other new technologies to predict aggregate durability in PCC pavement. The objective of this research is to evaluate thermogravimetric analysis (TGA) of carbonate aggregate. The TGA testing has been conducted with a TA 2950 Thermogravimetric Analyzer. The equipment is controlled by an IBM-compatible computer. The "TA Hi-RES" (trademark) software package allows for rapid testing while retaining high resolution. The carbon dioxide is driven off the dolomite fraction between 705 deg C and 745 deg C and off the calcite fraction between 905 deg C and 940 deg C. The graphical plot of temperature and weight loss using the same sample size and test procedure demonstrates that the test is very accurate and repeatable. A substantial number of both dolomites and limestones (calcites) have been subjected to TGA testing. The slopes of the weight-loss plot prior to the dolomite and calcite transitions do correlate with field performance. The noncarbonate fraction, which correlates with the acid insolubles, can be determined by TGA for most calcites and some dolomites. TGA has provided information that can be used to help predict the quality of carbonate aggregate.
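Following the description above, the CO2 mass lost in each temperature window can be converted to carbonate content through the decomposition stoichiometry (dolomite CaMg(CO3)2 releases two CO2 per formula unit, calcite CaCO3 one). A sketch of that conversion, assuming the two window losses have already been read off the TGA curve (the function name and interface are mine):

```python
# Molar masses in g/mol: CO2, dolomite CaMg(CO3)2, calcite CaCO3.
M_CO2, M_DOLOMITE, M_CALCITE = 44.01, 184.40, 100.09

def carbonate_fractions(sample_mass, co2_loss_705_745, co2_loss_905_940):
    """Convert TGA mass losses (mass of CO2 driven off in each temperature
    window, same units as sample_mass) into dolomite and calcite mass
    fractions of the sample."""
    dolomite = co2_loss_705_745 / (2 * M_CO2) * M_DOLOMITE  # 2 CO2 per dolomite
    calcite = co2_loss_905_940 / M_CO2 * M_CALCITE          # 1 CO2 per calcite
    return dolomite / sample_mass, calcite / sample_mass
```

A pure-calcite sample of 100.09 mg losing 44.01 mg of CO2 in the upper window comes out as 100% calcite; whatever mass the two carbonate fractions do not account for corresponds to the noncarbonate (acid-insoluble) residue.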