62 results for Screen Capture Software
Abstract:
BACKGROUND/AIMS: Cannabis use is a growing challenge for public health, calling for adequate instruments to identify problematic consumption patterns. The Cannabis Use Disorders Identification Test (CUDIT) is a 10-item questionnaire used for screening cannabis abuse and dependence. The present study evaluated this screening instrument. METHODS: In a representative population sample of 5,025 Swiss adolescents and young adults, 593 current cannabis users completed the CUDIT. Internal consistency was examined by means of Cronbach's alpha and confirmatory factor analysis. In addition, the CUDIT was compared to accepted concepts of problematic cannabis use (e.g. using cannabis and driving). ROC analyses were used to test the CUDIT's discriminative ability and to determine an appropriate cut-off. RESULTS: Two items ('injuries' and 'hours being stoned') had loadings below 0.5 on the unidimensional construct and correlated lower than 0.4 with the total CUDIT score. All concepts of problematic cannabis use were related to CUDIT scores. An optimal cut-off between six and eight points was found. CONCLUSIONS: Although the CUDIT seems to be a promising instrument to identify problematic cannabis use, there is a need to revise some of its items.
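The ROC-based cut-off determination described in this abstract is commonly done by maximising Youden's J (sensitivity + specificity − 1) across candidate thresholds. A minimal sketch with scikit-learn, using synthetic stand-in scores rather than the survey data:

```python
# Hedged sketch: choosing a screening cut-off from an ROC curve via
# Youden's J. The scores and labels below are synthetic stand-ins,
# not the Swiss survey data from the abstract.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Simulated CUDIT totals for non-problematic vs problematic users.
scores = np.concatenate([rng.normal(4, 3, 400), rng.normal(12, 5, 100)])
labels = np.concatenate([np.zeros(400), np.ones(100)])

fpr, tpr, thresholds = roc_curve(labels, scores)
youden_j = tpr - fpr                     # sensitivity + specificity - 1
best = np.argmax(youden_j)
print(f"AUC = {roc_auc_score(labels, scores):.2f}")
print(f"optimal cut-off ~ {thresholds[best]:.1f} "
      f"(sens {tpr[best]:.2f}, spec {1 - fpr[best]:.2f})")
```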
Abstract:
SUMMARY: We present a tool designed for the visualization of large-scale genetic and genomic data, exemplified by results from genome-wide association studies. This software provides an integrated framework to facilitate the interpretation of SNP association studies in their genomic context. Gene annotations can be retrieved from Ensembl, linkage disequilibrium data downloaded from HapMap, and custom data imported in BED or WIG format. AssociationViewer integrates functionalities that enable the aggregation or intersection of data tracks. It implements an efficient cache system and allows several very large genomic datasets to be displayed. AVAILABILITY: The Java code for AssociationViewer is distributed under the GNU General Public Licence and has been tested on Microsoft Windows XP, MacOSX and GNU/Linux operating systems. It is available from the SourceForge repository, which also includes a Java Web Start version, documentation and example data files.
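BED, one of the custom-track formats mentioned, is a simple tab-separated layout (chrom, start, end, then optional fields); an importer essentially parses such lines into intervals. A minimal illustrative sketch in Python (AssociationViewer itself is written in Java; this only shows the format):

```python
# Minimal sketch of parsing a BED custom track, the kind of input
# AssociationViewer accepts. Illustration of the format only.
from typing import NamedTuple, Iterator

class BedInterval(NamedTuple):
    chrom: str
    start: int   # 0-based, inclusive
    end: int     # 0-based, exclusive
    name: str

def read_bed(path: str) -> Iterator[BedInterval]:
    with open(path) as fh:
        for line in fh:
            # "track"/"browser" headers and comments are not intervals
            if line.startswith(("track", "browser", "#")) or not line.strip():
                continue
            fields = line.rstrip("\n").split("\t")
            yield BedInterval(fields[0], int(fields[1]), int(fields[2]),
                              fields[3] if len(fields) > 3 else ".")

# Usage (hypothetical file and SNP position):
# hits = [iv for iv in read_bed("track.bed")
#         if iv.chrom == "chr7" and iv.start <= 117559590 < iv.end]
```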
Abstract:
Extensible Markup Language (XML) is a generic computing language that provides an outstanding case study of the commodification of service standards. The development of this language in the late 1990s marked a shift in computer science, as its extensibility allowed any kind of data to be stored and shared. Many office software suites rely on it. The chapter highlights how the largest multinational firms pay special attention to gaining a recognised international standard for such a major technological innovation. It argues that standardisation processes affect market structures and can lead to market capture. By examining how a strategic use of standardisation arenas can generate profits, it shows that Microsoft succeeded in making its own technical solution a recognised ISO standard in 2008, even though the same arena had already adopted, two years earlier, the open-source standard set by IBM and Sun Microsystems. Yet XML standardisation also helped to establish a distinct model of information technology services at the expense of Microsoft's monopoly on proprietary software.
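The extensibility in question is simply that XML lets authors define their own tags, so arbitrary structured data can be serialised without changing the language itself. A small sketch with Python's standard library (the element names here are invented for illustration):

```python
# Sketch: XML's extensibility means arbitrary, self-describing markup.
# Tag names (experiment, sample, od600) are invented for illustration.
import xml.etree.ElementTree as ET

root = ET.Element("experiment", attrib={"id": "42"})
sample = ET.SubElement(root, "sample", attrib={"species": "E. coli"})
ET.SubElement(sample, "od600").text = "0.73"

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)  # <experiment id="42"><sample species="E. coli">...
parsed = ET.fromstring(xml_text)
print(parsed.find("./sample/od600").text)  # 0.73
```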
Abstract:
The ongoing aging of the population will lead to an increasing prevalence of chronic diseases and functional limitations. Preventive measures need to be promoted to curb health care utilization and avoid an explosion of health costs. Among these measures, those aimed at promoting the early recognition of chronic conditions associated with functional decline will have to be reinforced. This paper proposes simple, feasible and efficient procedures to screen, in primary care practices, for common geriatric conditions such as cognitive impairment, gait impairment, hearing and vision impairment, and functional limitation.
Abstract:
Eukaryotic transcription is tightly regulated by transcriptional regulatory elements, even though these elements may be located far away from their target genes. It is now widely recognized that these regulatory elements can be brought into close proximity through the formation of chromatin loops, and that these loops are crucial for transcriptional regulation of their target genes. The chromosome conformation capture (3C) technique presents a snapshot of long-range interactions by fixing physically interacting elements with formaldehyde, digesting the DNA, and ligating the fragments to obtain a library of unique ligation products. Recently, several large-scale modifications of the 3C technique have been presented. Here, we describe chromosome conformation capture sequencing (4C-seq), a high-throughput version of the 3C technique that combines the 3C-on-chip (4C) protocol with next-generation Illumina sequencing. The method is presented for use in mammalian cell lines, but can be adapted for mammalian tissues and any other eukaryotic genome.
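On the computational side, a typical first step in 4C-seq analysis is tallying mapped reads per restriction fragment to build a contact profile around the viewpoint. A schematic sketch, with simplified placeholder coordinates rather than the published pipeline:

```python
# Schematic sketch of one 4C-seq analysis step: counting mapped reads
# per restriction fragment to obtain a contact profile. Fragment
# boundaries and read positions are simplified placeholders.
import bisect
from collections import Counter

# Sorted restriction-fragment start coordinates on one chromosome.
fragment_starts = [0, 4_200, 9_800, 15_000, 22_500, 31_000]

def fragment_of(read_pos: int) -> int:
    """Index of the fragment containing a mapped read position."""
    return bisect.bisect_right(fragment_starts, read_pos) - 1

mapped_read_positions = [150, 4_300, 4_350, 9_900, 23_000, 23_100, 23_500]
profile = Counter(fragment_of(p) for p in mapped_read_positions)
for frag, count in sorted(profile.items()):
    print(f"fragment {frag} (start {fragment_starts[frag]}): {count} reads")
```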
Abstract:
The graffiti on pottery discovered on the site of Aventicum (Avenches, VD/Switzerland) form the largest corpus of minor inscriptions of the Roman Empire studied to date: a total of 1,828 graffiti have been found. The reading and recording of the inscriptions generally depend on the state of conservation of the graffito and its support. In numerous cases, only a pale shadow of the inscription is visible, which makes traditional examination with the naked eye unsuitable for decipherment. Consequently, advanced techniques have been applied to enhance the readability of such inscriptions. In our paper we show the efficiency of 3D laser profilometry, as well as high-resolution photography, as powerful means of deciphering illegible engraved inscriptions. The use of such analyses to decipher graffiti on pottery or on other materials enables a better understanding of minor inscriptions and substantially improves our knowledge of the daily life of ancient populations.
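Alongside profilometry, simple digital contrast enhancement of the high-resolution photographs can already bring out faint incisions. A hedged sketch using local histogram equalisation (CLAHE) from OpenCV; this is a generic technique chosen for illustration, not the study's actual workflow, and the file name is hypothetical:

```python
# Hedged sketch: enhancing faint engraved marks in a photograph with
# CLAHE (contrast-limited adaptive histogram equalisation). Generic
# illustration only; file name is hypothetical.
import cv2

img = cv2.imread("graffito.jpg", cv2.IMREAD_GRAYSCALE)
clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)          # boosts local contrast of shallow marks
cv2.imwrite("graffito_enhanced.jpg", enhanced)
```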
Abstract:
The coverage and volume of geo-referenced datasets are extensive and incessantly growing. The systematic capture of geo-referenced information generates large volumes of spatio-temporal data to be analyzed. Clustering and visualization play a key role in the exploratory analysis of these data and in the extraction of the knowledge embedded in them. However, the special characteristics of such data pose new challenges for visualization and clustering: complex structures, large numbers of samples, variables involved in a temporal context, high dimensionality and large variability in cluster shapes.

The central aim of my thesis is to propose new algorithms and methodologies for clustering and visualization that assist knowledge extraction from spatio-temporal geo-referenced data, thus improving decision-making processes. I present two original algorithms, one for clustering, the Fuzzy Growing Hierarchical Self-Organizing Networks (FGHSON), and one for exploratory visual data analysis, the Tree-structured Self-Organizing Map Component Planes. In addition, I present methodologies that, combined with FGHSON and the Tree-structured SOM Component Planes, allow space and time to be integrated seamlessly and simultaneously in order to extract knowledge embedded in a temporal context.

The originality of FGHSON lies in its capability to reflect the underlying structure of a dataset in a hierarchical, fuzzy way. A hierarchical fuzzy representation of clusters is crucial when data include complex structures with large variability in cluster shapes, variances, densities and numbers of clusters. The most important characteristics of FGHSON are: (1) it does not require an a priori setting of the number of clusters; (2) the algorithm executes several self-organizing processes in parallel, so that when dealing with large datasets the processes can be distributed, reducing the computational cost; and (3) only three parameters are needed to set up the algorithm. In the case of the Tree-structured SOM Component Planes, the novelty lies in the ability to create a structure that allows visual exploratory analysis of large, high-dimensional datasets. This algorithm builds a hierarchical structure of Self-Organizing Map Component Planes, arranging projections of similar variables in the same branches of the tree, so that similarities in the variables' behavior (e.g. local correlations, maximal and minimal values, and outliers) can be easily detected.

Both FGHSON and the Tree-structured SOM Component Planes were applied to several agroecological problems, proving very efficient for the exploratory analysis and clustering of spatio-temporal datasets. In this thesis I also tested three soft competitive learning algorithms: two well-known unsupervised soft competitive algorithms, namely the Self-Organizing Maps (SOMs) and the Growing Hierarchical Self-Organizing Maps (GHSOMs), and our original contribution, the FGHSON. Although these algorithms have been used in several areas, to my knowledge no previous work has applied and compared their performance on spatio-temporal geospatial data as is done in this thesis.

I propose original methodologies to explore spatio-temporal geo-referenced datasets through time. The approach uses time windows to capture temporal similarities and variations by means of the FGHSON clustering algorithm. The developed methodologies are used in two case studies: in the first, the objective was to find agroecozones that are similar through time; in the second, to find similar environmental patterns shifted in time. Several results presented in this thesis have led to new contributions to agroecological knowledge, for instance in sugar cane and blackberry production. Finally, in the framework of this thesis we developed several software tools: (1) a Matlab toolbox that implements the FGHSON algorithm, and (2) a program called BIS (Bio-inspired Identification of Similar agroecozones), an interactive graphical user interface that integrates the FGHSON algorithm with Google Earth in order to show zones with similar agroecological characteristics.
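The FGHSON code itself is distributed as a Matlab toolbox, but the soft competitive learning that SOM, GHSOM and FGHSON all build on can be sketched as a minimal Self-Organizing Map training loop. A didactic Python stand-in (not the FGHSON algorithm, which adds fuzzy membership and hierarchical growth):

```python
# Minimal SOM sketch illustrating soft competitive learning: each
# sample pulls its best-matching unit and, more weakly, that unit's
# grid neighbours. Didactic stand-in only, not FGHSON itself.
import numpy as np

rng = np.random.default_rng(1)
grid_w, grid_h, dim = 6, 6, 3
weights = rng.random((grid_w * grid_h, dim))
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], float)

data = rng.random((500, dim))            # stand-in spatio-temporal features
for t, x in enumerate(data):
    lr = 0.5 * (1 - t / len(data))       # decaying learning rate
    sigma = 3.0 * (1 - t / len(data)) + 0.5
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # winning unit
    d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)      # grid distances
    h = np.exp(-d2 / (2 * sigma ** 2))                  # neighbourhood kernel
    weights += lr * h[:, None] * (x - weights)          # soft update

print("trained codebook shape:", weights.shape)
```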
Abstract:
The SOS screen, as originally described by Perkins et al. (1999) [7], was set up with the aim of identifying Arabidopsis functions potentially involved in DNA metabolism. Such functions, when expressed in bacteria, are prone to disturb replication and thus trigger the SOS response. Consistently, expression of AtRAD51 and AtDMC1 induced the SOS response in bacteria, even affecting E. coli viability. One hundred SOS-inducing cDNAs were isolated from a cDNA library constructed from an Arabidopsis cell suspension that was found to highly express meiotic genes. A large proportion of these SOS(+) candidates are clearly related to DNA metabolism, others could be involved in RNA metabolism, while the remaining cDNAs encode either totally unknown proteins or proteins that were considered irrelevant. Seven SOS(+) candidate genes are induced following gamma irradiation. The in planta function of several of the SOS-inducing clones was investigated using T-DNA insertional mutants or RNA interference. Only one of the SOS(+) candidates examined exhibited a defined phenotype: plants silenced for DUT1 were sensitive to 5-fluorouracil (5FU), as is the case for the leaky dut-1 mutant of E. coli, which is affected in dUTPase activity. dUTPase is essential to prevent uracil incorporation in the course of DNA replication.
Abstract:
Background: The TID ratio indirectly reflects myocardial ischemia and is correlated with cardiac prognosis. We aimed at comparing the influence of three different software packages on the assessment of TID using Rb-82 cardiac PET/CT. Methods: In total, data of 30 patients were used, based on normal myocardial perfusion (SSS<3 and SRS<3) and stress myocardial blood flow (>2 mL/min/g) assessed by Rb-82 cardiac PET/CT. After reconstruction using 2D OSEM (2 iterations, 28 subsets) and 3D filtering (Butterworth, order = 10, ωc = 0.5), data were processed automatically, and then manually to define identical basal and apical limits on both stress and rest images. TID ratios were determined with the Myometrix®, ECToolbox® and QGS® software packages. Comparisons used ANOVA, Student t-tests and the Lin concordance test (ρc). Results: All 90 processings were performed successfully. TID ratios were not statistically different between software packages when data were processed automatically (P=0.2) or manually (P=0.17). There was a slight but significant relative overestimation of TID with automatic processing compared to manual processing using ECToolbox® (1.07 ± 0.13 vs 1.00 ± 0.13, P=0.001) and Myometrix® (1.07 ± 0.15 vs 1.01 ± 0.11, P=0.003), but not using QGS® (1.02 ± 0.12 vs 1.05 ± 0.11, P=0.16). The best concordance was achieved between ECToolbox® and Myometrix® manual processing (ρc=0.67). Conclusion: Whether in automatic or manual mode, TID estimation was not significantly influenced by software type. With Myometrix® or ECToolbox®, TID was significantly different between automatic and manual processing, but not with QGS®. The software package should be accounted for when defining TID normal reference limits, as well as in multicenter studies. QGS® seemed to be the most operator-independent software package, while ECToolbox® and Myometrix® produced the closest results.
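For reference, TID is the ratio of stress to rest left-ventricular volume, and Lin's concordance coefficient used above is ρc = 2·s_xy / (s_x² + s_y² + (x̄ − ȳ)²). A small sketch with invented paired measurements, not the study data:

```python
# Sketch: TID ratio and Lin's concordance correlation coefficient,
# the agreement statistic used in the abstract. Values are invented.
import numpy as np

def tid_ratio(stress_lv_volume: float, rest_lv_volume: float) -> float:
    """Transient ischemic dilation: stress/rest LV volume ratio."""
    return stress_lv_volume / rest_lv_volume

def lin_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """rho_c = 2*cov(x,y) / (var(x) + var(y) + (mean(x)-mean(y))**2)."""
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

rng = np.random.default_rng(2)
auto = rng.normal(1.05, 0.12, 30)             # fictitious automatic TIDs
manual = auto + rng.normal(-0.03, 0.08, 30)   # fictitious manual TIDs
print(f"example TID: {tid_ratio(142.0, 135.0):.2f}")
print(f"Lin rho_c (auto vs manual): {lin_ccc(auto, manual):.2f}")
```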
Abstract:
The aim of this study was to determine the effect of using video analysis software on the interrater reliability of visual assessments of gait videos in children with cerebral palsy. Two clinicians viewed the same random selection of 20 sagittal and frontal video recordings of 12 children with cerebral palsy routinely acquired during outpatient rehabilitation clinics. Both observers rated these videos in a random sequence for each lower limb using the Observational Gait Scale, once with standard video software and once with video analysis software (Dartfish®), which can perform angle and timing measurements. The video analysis software improved interrater agreement, measured by weighted Cohen's kappas, for the total score (κ 0.778→0.809) and for all items that required angle and/or timing measurements (knee position mid-stance κ 0.344→0.591; hindfoot position mid-stance κ 0.160→0.346; foot contact mid-stance κ 0.700→0.854; timing of heel rise κ 0.769→0.835). The use of video analysis software is an efficient approach to improving the reliability of visual video assessments.
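The weighted Cohen's kappa reported above penalises rater disagreements by their distance on the ordinal scale, and scikit-learn computes it directly. A sketch with invented ratings, not the study's videos:

```python
# Sketch: interrater agreement on an ordinal scale with weighted
# Cohen's kappa, as used in the abstract. Ratings below are invented.
from sklearn.metrics import cohen_kappa_score

rater_a = [3, 2, 2, 1, 0, 3, 2, 1, 1, 2]   # e.g. one ordinal gait-scale item
rater_b = [3, 2, 1, 1, 0, 3, 3, 1, 2, 2]

kappa_linear = cohen_kappa_score(rater_a, rater_b, weights="linear")
kappa_quad = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"linear-weighted kappa:    {kappa_linear:.3f}")
print(f"quadratic-weighted kappa: {kappa_quad:.3f}")
```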
Abstract:
Introduction: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on measured blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. In the last decades, computer programs have been developed to assist clinicians in this task. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Method: A literature and Internet search was performed to identify software. All programs were tested on a common personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: 12 software tools were identified, tested and ranked, yielding a comprehensive review of the available software's characteristics. The number of drugs handled varies widely, and 8 programs allow users to add their own drug models. 10 programs are able to compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while 9 are also able to suggest an a priori dosage regimen (prior to any blood concentration measurement) based on individual patient covariates such as age, gender and weight. Among those applying Bayesian analysis, one uses a non-parametric approach. The top two software tools emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated (e.g. in terms of storage or report generation) or less user-friendly. Conclusion: Whereas two integrated programs are at the top of the ranked list, such complex tools may not fit all institutions, and each software tool must be considered with respect to the individual needs of hospitals or clinicians. Interest in computing tools to support therapeutic monitoring is still growing. Although developers have put effort into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, data storage capacity and report generation.
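The a posteriori Bayesian adjustment these tools implement amounts to a MAP estimate: individual PK parameters are those maximising the product of the population prior and the likelihood of the measured concentration. A toy one-compartment sketch in Python, where all PK values are invented placeholders rather than any program's actual model:

```python
# Toy sketch of Bayesian a posteriori TDM: MAP estimation of an
# individual clearance for a one-compartment IV model from a single
# measured level. Prior and observed values are invented placeholders.
import numpy as np
from scipy.optimize import minimize_scalar

dose, volume = 500.0, 40.0          # mg; L (volume assumed known)
t_sample, c_obs = 8.0, 6.2          # h; mg/L measured concentration
cl_pop, omega = 4.0, 0.3            # population clearance (L/h); log-SD prior
sigma = 0.8                         # residual error SD (mg/L)

def predicted(cl: float) -> float:
    """C(t) = (dose/V) * exp(-CL/V * t) for a one-compartment IV bolus."""
    return dose / volume * np.exp(-cl / volume * t_sample)

def neg_log_posterior(log_cl: float) -> float:
    prior = (log_cl - np.log(cl_pop)) ** 2 / (2 * omega ** 2)
    likelihood = (c_obs - predicted(np.exp(log_cl))) ** 2 / (2 * sigma ** 2)
    return prior + likelihood

res = minimize_scalar(neg_log_posterior, bounds=(-2, 4), method="bounded")
cl_map = np.exp(res.x)
print(f"MAP clearance: {cl_map:.2f} L/h (population prior {cl_pop} L/h)")
# A new dose targeting a desired concentration can then use cl_map.
```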
Abstract:
Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on blood concentration measurements. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities. Methods: The literature and the Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them. Results: 12 software tools were identified, tested and ranked, yielding a comprehensive review of the available software characteristics. The number of drugs handled varies from 2 to more than 180, and integration of different population types is available in some programs. Furthermore, 8 programs offer the ability to add new drug models based on population PK data. 10 tools incorporate Bayesian computation to predict the dosage regimen (individual parameters are calculated based on population PK models). All of them can compute Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 can also suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated or less user-friendly. Conclusions: Whereas two software packages are ranked at the top of the list, such complex tools may not fit all institutions, and each program must be considered with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Although interest in TDM tools is growing and effort has been put into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, data storage capability and automated report generation.
Abstract:
Alpha-D-mannopyranosides are potent FimH antagonists, which inhibit the adhesion of Escherichia coli to highly mannosylated uroplakin Ia on the urothelium and therefore offer an efficient therapeutic opportunity for the treatment and prevention of urinary tract infection. To evaluate the therapeutic potential of FimH antagonists, their effect on the disaggregation of E. coli from Candida albicans and guinea pig erythrocytes (GPE) was studied. The mannose-specific binding of E. coli to yeast cells and erythrocytes is mediated by type 1 pili and can be monitored by aggregometry. Maximal aggregation of C. albicans or GPE with E. coli is reached after 600 s. The FimH antagonist was then added and disaggregation was determined by light transmission over a period of 1,400 s. A FimH-deleted mutant of E. coli, which does not induce any aggregation, was used in a control experiment. The activities of FimH antagonists are expressed as IC(50) values, the half-maximal inhibitory concentration of the disaggregation potential. n-Heptyl alpha-D-mannopyranoside (1) was used as a reference compound and exhibits an IC(50) of 77.14 microM, whereas methyl alpha-D-mannopyranoside (2) does not lead to any disaggregation at concentrations up to 800 microM. o-Chloro-p-[N-(2-ethoxy-3,4-dioxocyclobut-1-enyl)amino]phenyl alpha-D-mannopyranoside (3) shows a 90-fold, and 2-chloro-4-nitrophenyl alpha-D-mannopyranoside (4) a 6-fold, increase in affinity compared to 1. Finally, 4-nitrophenyl alpha-D-mannopyranoside (5) exhibits an activity similar to that of 1. D-Galactose (6) was used as a negative control. The standardized aggregation assay generates concentration-dependent, reproducible data, allowing FimH antagonists to be ranked by their potency to inhibit E. coli adherence, and can therefore be employed to select candidates for experimental and clinical studies on the treatment and prevention of urinary tract infections.
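An IC(50) like those reported above is typically obtained by fitting a Hill/logistic curve to the concentration-response data. A sketch with fabricated data points (the real assay values are in the paper):

```python
# Sketch: estimating an IC50 by fitting a Hill curve to
# concentration-response data with scipy. Data points are fabricated.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ic50, slope):
    """Fraction of maximal disaggregation at a given concentration."""
    return top * conc ** slope / (ic50 ** slope + conc ** slope)

conc = np.array([5, 10, 25, 50, 100, 200, 400, 800], float)   # microM
response = np.array([0.05, 0.11, 0.22, 0.41, 0.58, 0.75, 0.86, 0.92])

params, _ = curve_fit(hill, conc, response, p0=[1.0, 80.0, 1.0])
top, ic50, slope = params
print(f"fitted IC50 ~ {ic50:.1f} microM (Hill slope {slope:.2f})")
```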
Abstract:
The book presents the state of the art in machine learning algorithms (artificial neural networks of different architectures, support vector machines, etc.) as applied to the classification and mapping of spatially distributed environmental data. Basic geostatistical algorithms are presented as well. New trends in machine learning and their application to spatial data are described, and real case studies based on environmental and pollution data are carried out. The book provides a CD-ROM with the Machine Learning Office software, including sample data sets, that will allow both students and researchers to put the concepts rapidly into practice.