33 results for IOS (Computer operating system)


Relevance: 30.00%

Abstract:

SUMMARY: We present a tool designed for the visualization of large-scale genetic and genomic data, exemplified by results from genome-wide association studies. The software provides an integrated framework that facilitates the interpretation of SNP association studies in their genomic context. Gene annotations can be retrieved from Ensembl, linkage disequilibrium data downloaded from HapMap, and custom data imported in BED or WIG format. AssociationViewer integrates functionalities for aggregating or intersecting data tracks. It implements an efficient cache system and can display several very large genomic datasets. AVAILABILITY: The Java code for AssociationViewer is distributed under the GNU General Public Licence and has been tested on Microsoft Windows XP, Mac OS X and GNU/Linux operating systems. It is available from the SourceForge repository, which also includes a Java Web Start version, documentation and example data files.
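The track intersection operation mentioned above can be pictured as interval overlap on BED-style records. The following is a minimal illustrative sketch, not AssociationViewer's actual code; the real tool adds caching and scales to very large tracks, where an interval tree would replace this naive nested loop:

    def read_bed(lines):
        """Parse minimal BED records: chrom, start, end (0-based, half-open)."""
        for line in lines:
            chrom, start, end = line.split()[:3]
            yield chrom, int(start), int(end)

    def intersect(track_a, track_b):
        """Yield the overlapping parts of two interval tracks."""
        for chrom_a, s1, e1 in track_a:
            for chrom_b, s2, e2 in track_b:
                if chrom_a == chrom_b and s1 < e2 and s2 < e1:
                    yield chrom_a, max(s1, s2), min(e1, e2)

    genes = list(read_bed(["chr1 100 500", "chr1 900 1200"]))
    snps = list(read_bed(["chr1 450 460", "chr1 700 710"]))
    print(list(intersect(genes, snps)))  # [('chr1', 450, 460)]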

Relevance: 30.00%

Abstract:

BACKGROUND: Clinical practice does not always reflect best practice and evidence, partly because of unconscious acts of omission, information overload, or inaccessible information. Reminders may help clinicians overcome these problems by prompting them to recall information that they already know or would be expected to know, and by providing information or guidance in a more accessible and relevant format at a particularly appropriate time. OBJECTIVES: To evaluate the effects of reminders automatically generated through a computerized system and delivered on paper to healthcare professionals on processes of care (related to healthcare professionals' practice) and outcomes of care (related to patients' health condition). SEARCH METHODS: For this update, the EPOC Trials Search Co-ordinator searched the following databases between June 11 and 19, 2012: the Cochrane Central Register of Controlled Trials (CENTRAL) and Cochrane Library (Economics, Methods, and Health Technology Assessment sections), Issue 6, 2012; MEDLINE, OVID (1946- ), Daily Update, and In-Process; EMBASE, Ovid (1947- ); CINAHL, EbscoHost (1980- ); EPOC Specialised Register, Reference Manager; and INSPEC, Engineering Village. The authors also reviewed the reference lists of related reviews and studies. SELECTION CRITERIA: We included individual- or cluster-randomized controlled trials (RCTs) and non-randomized controlled trials (NRCTs) that evaluated the impact of computer-generated reminders delivered on paper to healthcare professionals on processes and/or outcomes of care. DATA COLLECTION AND ANALYSIS: Review authors working in pairs independently screened studies for eligibility and abstracted data. We contacted authors to obtain important missing information for studies published within the last 10 years. For each study, we extracted the primary outcome when it was defined, or calculated the median effect size across all reported outcomes. We then calculated the median absolute improvement and interquartile range (IQR) in process adherence across included studies, using the primary outcome or median outcome as the representative outcome. MAIN RESULTS: In the 32 included studies, computer-generated reminders delivered on paper to healthcare professionals achieved moderate improvement in professional practice, with a median improvement in processes of care of 7.0% (IQR: 3.9% to 16.4%). Implementing reminders alone improved care by 11.2% (IQR: 6.5% to 19.6%) compared with usual care, while implementing reminders in addition to another intervention improved care by only 4.0% (IQR: 3.0% to 6.0%) compared with the other intervention. The quality of evidence for these comparisons was rated as moderate according to the GRADE approach. Two reminder features were associated with larger effect sizes: providing space on the reminder for the provider to enter a response (median 13.7% versus 4.3% with no response space, P value = 0.01) and providing an explanation of the content or advice on the reminder (median 12.0% versus 4.2% with no explanation, P value = 0.02). The median improvement in processes of care also differed according to the behaviour the reminder targeted: for instance, reminders to vaccinate improved processes of care by 13.1% (IQR: 12.2% to 20.7%), more than reminders targeting other behaviours. In the only study with sufficient power to detect a clinically significant effect on outcomes of care, reminders were not associated with significant improvements.
AUTHORS' CONCLUSIONS: There is moderate-quality evidence that computer-generated reminders delivered on paper to healthcare professionals achieve moderate improvement in processes of care. Two characteristics emerged as significant predictors of improvement: providing space on the reminder for a response from the clinician and providing an explanation of the reminder's content or advice. The heterogeneity of the reminder interventions included in this review also suggests that reminders can improve care in various settings under various conditions.
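The summary statistic used throughout these results, a median improvement with interquartile range across studies, can be reproduced in a few lines. The values below are invented placeholders to demonstrate the computation, not data from the 32 included studies:

    import statistics

    # Hypothetical per-study improvements in process adherence (percentage points).
    improvements = [2.1, 3.9, 5.5, 7.0, 9.8, 16.4, 21.0]

    median = statistics.median(improvements)
    q1, _, q3 = statistics.quantiles(improvements, n=4)  # quartile cut points
    print(f"median {median:.1f}% (IQR: {q1:.1f}% to {q3:.1f}%)")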

Relevance: 30.00%

Abstract:

Visualization of the vascular systems of organs or of small animals is important for an assessment of basic physiological conditions, especially in studies that involve genetically manipulated mice. For a detailed morphological analysis of the vascular tree, it is necessary to demonstrate the system in its entirety. In this study, we present a new lipophilic contrast agent, Angiofil, for performing postmortem microangiography with microcomputed tomography. The new contrast agent was tested in 10 wild-type mice. Imaging of the vascular system revealed vessels down to the caliber of capillaries, and the digital three-dimensional data obtained from the scans allowed virtual cutting, magnification, and scaling without destroying the sample. Using computer software, parameters such as vessel length and caliber could be quantified and remapped by color coding onto the surface of the vascular system. The liquid Angiofil is easy to handle and highly radio-opaque. Because of its lipophilic nature, it is retained intravascularly, which facilitates virtual vessel segmentation and yields an enduring signal, an advantage for repeated investigations or when samples must be transported from the site of preparation to the place of analysis. These characteristics make Angiofil a promising novel contrast agent; combined with microcomputed tomography, it has the potential to become a powerful method for rapid vascular phenotyping.

Relevance: 30.00%

Abstract:

This dissertation is concerned with the development of algorithmic methods for the unsupervised learning of natural language morphology, using a symbolically transcribed wordlist. It focuses on the case of languages approaching the introflectional type, such as Arabic or Hebrew. The morphology of such languages is traditionally described in terms of discontinuous units: consonantal roots and vocalic patterns. Inferring this kind of structure is a challenging task for current unsupervised learning systems, which generally operate with continuous units. In this study, the problem of learning root-and-pattern morphology is divided into a phonological and a morphological subproblem.
The phonological component of the analysis seeks to partition the symbols of a corpus (phonemes, letters) into two subsets that correspond well with the phonetic definition of consonants and vowels; building on this result, the morphological component attempts to establish the list of roots and patterns in the corpus and to infer the rules that govern their combination. We assess the extent to which this can be done on the basis of two hypotheses: (i) the distinction between consonants and vowels can be learned by observing their tendency to alternate in speech; (ii) roots and patterns can be identified as sequences of the previously discovered consonants and vowels, respectively. The proposed algorithm uses a purely distributional method for partitioning symbols. It then applies analogical principles to identify a preliminary set of reliable roots and patterns and to enlarge this set gradually. The extension process is guided by an evaluation procedure based on the minimum description length principle, in line with the approach to morphological learning embodied in LINGUISTICA (Goldsmith, 2001). The algorithm is implemented as a computer program named ARABICA and is evaluated with regard to its ability to account for the system of plural formation in a corpus of Arabic nouns. This thesis shows that complex linguistic structures can be discovered without recourse to a rich set of a priori hypotheses about the phenomena under consideration. It illustrates the possible synergy between learning mechanisms operating at distinct levels of linguistic description, and attempts to determine where and why such cooperation fails. It concludes that the tension between the universality of the consonant-vowel distinction and the specificity of root-and-pattern structure is crucial for understanding the strengths and weaknesses of this approach.
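The phonological subproblem has a well-known purely distributional solution in the same spirit: Sukhotin's classic algorithm, which exploits the tendency of consonants and vowels to alternate. The sketch below is illustrative only and is not claimed to be ARABICA's actual procedure:

    from collections import defaultdict

    def sukhotin(words):
        """Split the symbols of a wordlist into 'vowels' and 'consonants'
        from adjacency counts alone (Sukhotin's algorithm)."""
        pairs = defaultdict(int)
        symbols = set()
        for w in words:
            symbols.update(w)
            for a, b in zip(w, w[1:]):
                if a != b:                   # self-adjacencies are ignored
                    pairs[a, b] += 1
                    pairs[b, a] += 1
        # Every symbol starts out classified as a consonant.
        sums = {s: sum(pairs[s, t] for t in symbols) for s in symbols}
        vowels = set()
        while sums:
            best = max(sums, key=sums.get)   # most vowel-like remaining symbol
            if sums[best] <= 0:
                break
            vowels.add(best)
            del sums[best]
            for s in sums:                   # discount adjacency to the new vowel
                sums[s] -= 2 * pairs[s, best]
        return vowels, symbols - vowels

    # Toy Arabic-like wordlist (root k-t-b under different vocalic patterns).
    v, c = sukhotin(["kitab", "kutub", "katib", "maktab", "bab"])
    print("vowels:", sorted(v), "consonants:", sorted(c))

On this toy list the algorithm separates {a, i, u} from {b, k, m, t}; roots and patterns can then be read off as the consonant and vowel subsequences of each word.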

Relevance: 30.00%

Abstract:

Introduction: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on the measurement of blood concentrations. Maintaining concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities.

Method: A literature and Internet search was performed to identify software. All programs were tested on a common personal computer. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them.

Results: 12 software tools were identified, tested and ranked, providing a comprehensive review of the available software's characteristics. The number of drugs handled varies widely, and 8 programs let users add their own drug models. 10 programs can compute Bayesian dosage adaptation based on a blood concentration (a posteriori adjustment), while 9 can also suggest an a priori dosage regimen (prior to any blood concentration measurement) based on individual patient covariates such as age, gender and weight. Among those applying Bayesian analysis, one uses the non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated (e.g. in terms of storage or report generation) or less user-friendly.

Conclusion: Whereas two integrated programs are at the top of the ranked list, such complex tools may not fit all institutions, and each software tool must be weighed against the individual needs of hospitals or clinicians. Interest in computing tools to support therapeutic monitoring is still growing. Although developers have put effort into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capacity of data storage and report generation.
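The standardized grid reduces to a weighted sum per program. The criterion weights and ratings below are invented for illustration; the study's actual weights are not given in the abstract:

    # Hypothetical weights for the five criteria named in the abstract.
    WEIGHTS = {"pharmacokinetic relevance": 0.35, "user-friendliness": 0.25,
               "computing aspects": 0.20, "interfacing": 0.10, "storage": 0.10}

    def weighted_score(ratings):
        """Combine per-criterion ratings (0-10) into a single weighted score."""
        return sum(WEIGHTS[c] * r for c, r in ratings.items())

    program_a = {"pharmacokinetic relevance": 9, "user-friendliness": 8,
                 "computing aspects": 8, "interfacing": 6, "storage": 7}
    print(f"program A: {weighted_score(program_a):.2f} / 10")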

Relevance: 30.00%

Abstract:

Objectives: Therapeutic drug monitoring (TDM) aims at optimizing treatment by individualizing the dosage regimen based on blood concentration measurements. Maintaining concentrations within a target range requires pharmacokinetic (PK) and clinical capabilities. Bayesian calculation represents a gold standard in the TDM approach but requires computing assistance. The aim of this benchmarking was to assess and compare computer tools designed to support TDM clinical activities.

Methods: The literature and the Internet were searched to identify software. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing, and storage. A weighting factor was applied to each criterion of the grid to reflect its relative importance. To assess the robustness of the software, six representative clinical vignettes were also processed through all of them.

Results: 12 software tools were identified, tested and ranked, providing a comprehensive review of the available software characteristics. The number of drugs handled varies from 2 to more than 180, and some programs integrate different population types. In addition, 8 programs offer the ability to add new drug models based on population PK data. 10 computer tools incorporate Bayesian computation to predict the dosage regimen (individual parameters are calculated from population PK models). All of them can compute Bayesian a posteriori dosage adaptation based on a blood concentration, while 9 can also suggest an a priori dosage regimen based only on individual patient covariates. Among those applying Bayesian analysis, MM-USC*PACK uses a non-parametric approach. The top two programs emerging from this benchmark are MwPharm and TCIWorks. The other programs evaluated also have good potential but are less sophisticated or less user-friendly.

Conclusions: Whereas two software packages are ranked at the top of the list, such complex tools may not fit all institutions, and each program must be weighed against the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Although interest in TDM tools is growing and effort has been put into them in recent years, there is still room for improvement, especially in terms of institutional information system interfacing, user-friendliness, capacity for data storage and automated report generation.
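The Bayesian a posteriori adaptation these tools implement amounts to maximum a posteriori (MAP) estimation of individual PK parameters given population priors and a measured level. The one-compartment model and all numbers below are illustrative assumptions, not taken from any benchmarked program:

    import numpy as np
    from scipy.optimize import minimize

    # Assumed log-normal population priors: clearance CL [L/h] and volume V [L].
    POP_MEAN = np.log([5.0, 40.0])
    POP_SD = np.array([0.30, 0.25])
    SIGMA = 0.8                      # assumed residual assay error [mg/L]

    def conc(theta, dose, t):
        """One-compartment IV bolus model: C(t) = dose/V * exp(-(CL/V) * t)."""
        cl, v = np.exp(theta)
        return dose / v * np.exp(-(cl / v) * t)

    def neg_log_posterior(theta, dose, times, obs):
        prior = np.sum(((theta - POP_MEAN) / POP_SD) ** 2)
        likelihood = np.sum(((obs - conc(theta, dose, times)) / SIGMA) ** 2)
        return 0.5 * (prior + likelihood)

    # A single measured level 6 h after a 500 mg dose (made-up value).
    times, obs = np.array([6.0]), np.array([4.1])
    fit = minimize(neg_log_posterior, POP_MEAN, args=(500.0, times, obs))
    cl, v = np.exp(fit.x)
    print(f"MAP estimates: CL = {cl:.2f} L/h, V = {v:.1f} L")

The individualized regimen is then derived from the MAP parameters, for instance by choosing the dose that keeps the predicted trough within the target range.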

Relevance: 30.00%

Abstract:

OBJECTIVES: The purpose of this study was to assess the effectiveness of a novel radiation-independent aiming device for distal locking of intramedullary nails in a human cadaver model. METHODS: A new targeting system was used in 25 intact human cadaver femora for the distal locking procedure after insertion of an intramedullary nail. The number of successful screw placements and the time needed for this locking procedure were recorded. The accuracy of the aiming process was evaluated by computed tomography. RESULTS: The duration of the distal locking process was 8.0 ± 1.8 minutes (mean ± SD; range, 4-11 minutes). None of the screw placements required fluoroscopic guidance. Computed tomography revealed high accuracy of the locking process. The incidence angle (α) of the locking screws through the distal locking holes of the nail was 86.8° ± 5.0° (mean ± SD; range, 80°-96°). Targeting failed in 1 static locking screw because of a material defect in the drilling sleeve. CONCLUSIONS: This cadaver study indicated that an aiming arm-based targeting device is highly reliable and accurate. The promising results suggest that it will help to decrease radiation exposure compared with the traditional "free-hand technique."

Relevance: 30.00%

Abstract:

The value of earmarks as an efficient means of personal identification is still subject to debate. It has been argued that the field lacks a firm, systematic and structured data basis to help practitioners form their conclusions, and there is a paucity of research on the selectivity of the features used when comparing an earmark with reference earprints taken from an individual. This study proposes a system for the automatic comparison of earprints and earmarks that operates without any manual extraction of key points or manual annotation. For each donor, a model is created from multiple reference prints, hence capturing the donor's within-source variability. For each comparison between a mark and a model, the images are automatically aligned and a proximity score, based on a normalized 2D correlation coefficient, is calculated. Appropriate use of this score allows deriving a likelihood ratio that can be explored under known states of affairs (both in cases where the mark is known to have been left by the donor who gave the model and, conversely, in cases where the mark is known to originate from a different source). To assess the system's performance, a first dataset containing 1229 donors, elaborated during the FearID research project, was used. On these data, the system performed mark-to-print comparisons with an equal error rate (EER) of 2.3%, and about 88% of marks were found in the first 3 positions of the hit list. For print-to-print transactions, the equal error rate was 0.5%. The system was then tested using real-case data obtained from police forces.
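The proximity score is a normalized 2D correlation coefficient between the aligned images; a minimal sketch (alignment and model building omitted):

    import numpy as np

    def ncc(a, b):
        """Normalized 2D correlation of two aligned, same-size grayscale
        images; 1.0 means identical up to brightness gain and offset."""
        a = a - a.mean()
        b = b - b.mean()
        return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

    rng = np.random.default_rng(0)
    mark = rng.random((64, 64))
    print(ncc(mark, mark))                   # perfect match: 1.0
    print(ncc(mark, rng.random((64, 64))))   # unrelated image: near 0

Scores from known same-source and known different-source comparisons then provide the two distributions from which the likelihood ratio is derived.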

Relevance: 30.00%

Abstract:

MRI has evolved into an important diagnostic technique in medical imaging. However, the reliability of the derived diagnosis can be degraded by artifacts, which challenge both radiologists and automatic computer-aided diagnosis. This work proposes a fully automatic method for measuring the image quality of three-dimensional (3D) structural MRI. Quality measures are derived by analyzing the air background of magnitude images and are capable of detecting image degradation from several sources, including bulk motion, residual magnetization from incomplete spoiling, blurring, and ghosting. The method has been validated on 749 3D T1-weighted 1.5T and 3T head scans acquired at 36 Alzheimer's Disease Neuroimaging Initiative (ADNI) study sites operating with various software and hardware combinations. Results are compared against qualitative grades assigned by the ADNI quality control center (taken as the reference standard). The derived quality indices are independent of the MRI system used and agree with the reference standard quality ratings with high sensitivity and specificity (>85%). The proposed procedures for quality assessment could be of great value for both research and routine clinical imaging. They could greatly improve workflow through their ability to rule out the need for a repeat scan while the patient is still in the magnet bore.
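As one concrete instance of an air-background measure (a sketch under assumed conventions, not the paper's actual indices): in a magnitude image the air background of an artifact-free scan is Rayleigh-distributed noise, so structured energy there betrays ghosting or motion.

    import numpy as np

    def background_noise(vol, margin=8):
        """Estimate background noise of a 3D magnitude image from its eight
        corner blocks, where only air should appear. For pure noise the
        magnitude background is Rayleigh, so mean = sigma * sqrt(pi/2);
        artifacts (ghosting, motion, spoiling residue) inflate the spread."""
        m = margin
        corners = [vol[sx, sy, sz]
                   for sx in (slice(0, m), slice(-m, None))
                   for sy in (slice(0, m), slice(-m, None))
                   for sz in (slice(0, m), slice(-m, None))]
        bg = np.concatenate([c.ravel() for c in corners])
        sigma = bg.mean() * np.sqrt(2.0 / np.pi)   # Rayleigh noise estimate
        return sigma, bg.std()

    vol = np.abs(np.random.default_rng(1).normal(size=(64, 64, 64)))
    print(background_noise(vol))   # toy volume of pure noise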

Relevance: 30.00%

Abstract:

Computed tomography angiography (CTA) images are the standard for assessing peripheral artery disease (PAD). This paper presents a computer-aided detection (CAD) and computer-aided measurement (CAM) system for PAD. The CAD stage detects the arterial network using a 3D region-growing method and a fast 3D morphology operation. The CAM stage accurately measures artery diameters from the detected vessel centerline, compensating for the partial volume effect using expectation maximization (EM) and a Markov random field (MRF). The system was evaluated on phantom data and applied to 15 CTA datasets, where the detection accuracy for stenosis was 88% and the measurement error was 8%.
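A minimal sketch of the 3D region-growing step (the intensity thresholds and the toy volume are assumptions for illustration; the paper's method adds 3D morphology and centerline extraction):

    from collections import deque
    import numpy as np

    def region_grow_3d(vol, seed, low, high):
        """Flood-fill style 3D region growing: collect 6-connected voxels
        whose intensity lies in [low, high], starting from a seed voxel
        placed inside the vessel."""
        mask = np.zeros(vol.shape, dtype=bool)
        queue = deque([seed])
        mask[seed] = True
        while queue:
            x, y, z = queue.popleft()
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (x + dx, y + dy, z + dz)
                if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
                        and not mask[n] and low <= vol[n] <= high:
                    mask[n] = True
                    queue.append(n)
        return mask

    vol = np.zeros((32, 32, 32))
    vol[10:20, 14:18, 14:18] = 300.0   # toy contrast-filled vessel segment
    print(region_grow_3d(vol, (12, 15, 15), 150.0, 450.0).sum())  # 10*4*4 = 160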

Relevance: 30.00%

Abstract:

AIMS: Surgical ablation procedures for treating atrial fibrillation have been shown to be highly successful; however, the ideal ablation pattern remains to be determined. This article reports a systematic study of the effectiveness of different ablation line patterns. METHODS AND RESULTS: Ablation line patterns were studied in a biophysical model of the human atria by combining basic lines: (i) in the right atrium: isthmus line, line between the venae cavae, and appendage line; and (ii) in the left atrium: several versions of pulmonary vein isolation, connection of the pulmonary veins, isthmus line, and appendage line. Success rates and the presence of residual atrial flutter were documented. Basic patterns yielded conversion rates of only 10-25% and 10-55% in the right and left atria, respectively. The best result for pulmonary vein isolation was obtained when a single closed line encompassed all veins (55%). Combining lines in the right or left atrium alone led to success rates of 65 and 80%, respectively. Higher rates, up to 90-100%, could be obtained if right and left lines were combined. The inclusion of a left isthmus line was found to be essential for avoiding uncommon left atrial flutter. CONCLUSION: Some of the patterns studied achieved a high conversion rate while using fewer lines than the Maze III procedure. The biophysical atrial model is shown to be effective in the search for promising alternative ablation strategies.

Relevance: 30.00%

Abstract:

The aim of this study was to develop an ambulatory system for three-dimensional (3D) knee kinematics evaluation that can be used outside a laboratory during long-term monitoring. To show the efficacy of this ambulatory system, knee function was analysed with it after an anterior cruciate ligament (ACL) lesion and after reconstructive surgery. The proposed system was composed of two 3D gyroscopes, fixed on the shank and on the thigh, and a portable data logger for signal recording. The measured parameters were the 3D mean range of motion (ROM), with the healthy knee used as control. The precision of the system was first assessed against an ultrasound reference system, and its repeatability was also estimated. A clinical study was then performed on five unilateral ACL-deficient men (range: 19-36 years) prior to, and one year after, the surgery. The patients were evaluated with the IKDC score, and the kinematics measurements were carried out on a 30 m walking trial. The precision in comparison with the reference system was 4.4 degrees, 2.7 degrees and 4.2 degrees for flexion-extension, internal-external rotation, and abduction-adduction, respectively. The repeatability of the results for the three directions was 0.8 degrees, 0.7 degrees and 1.8 degrees. The averaged ROM of the five patients' healthy knee was 70.1 degrees (standard deviation (SD) 5.8 degrees), 24.0 degrees (SD 3.0 degrees) and 12.0 degrees (SD 6.3 degrees) for flexion-extension, internal-external rotation and abduction-adduction before surgery, and 76.5 degrees (SD 4.1 degrees), 21.7 degrees (SD 4.9 degrees) and 10.2 degrees (SD 4.6 degrees) one year following the reconstruction. The results for the pathologic knee were 64.5 degrees (SD 6.9 degrees), 20.6 degrees (SD 4.0 degrees) and 19.7 degrees (SD 8.2 degrees) during the first evaluation, and 72.3 degrees (SD 2.4 degrees), 25.8 degrees (SD 6.4 degrees) and 12.4 degrees (SD 2.3 degrees) during the second one. The performance of the system enabled us to detect knee function modifications in the sagittal and transverse planes. Prior to the reconstruction, the ROM of the injured knee was lower in flexion-extension and internal-external rotation in comparison with the contralateral knee. One year after the surgery, four patients were classified normal (A) and one almost normal (B) according to the IKDC score, and changes in the kinematics of the five patients remained: lower flexion-extension ROM and higher internal-external rotation ROM in comparison with the contralateral knee. The 3D kinematics was changed after an ACL lesion and remained altered one year after the surgery.
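A sketch of how ROM can be derived from the two gyroscope signals (the sampling rate, axis alignment and synthetic signals are assumptions; a real system would also correct integration drift, e.g. by filtering):

    import numpy as np

    def range_of_motion(gyro_thigh, gyro_shank, fs):
        """Flexion-extension ROM from two single-axis angular-rate signals
        (deg/s): integrate the relative rate to an angle, take max - min."""
        rel_rate = gyro_shank - gyro_thigh    # relative rotation rate at the knee
        angle = np.cumsum(rel_rate) / fs      # simple numeric integration, degrees
        return angle.max() - angle.min()

    fs = 200.0                                # assumed sampling rate [Hz]
    t = np.arange(0.0, 10.0, 1.0 / fs)
    thigh = 20.0 * np.sin(2 * np.pi * t)      # synthetic gait-like signals
    shank = 50.0 * np.sin(2 * np.pi * t - 1.0)
    print(f"ROM: {range_of_motion(thigh, shank, fs):.1f} degrees")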

Relevance: 30.00%

Abstract:

An assay for the simultaneous analysis of pharmaceutical compounds and their metabolites from micro whole-blood samples (i.e. 5 microL) was developed using an on-line dried blood spot (on-line DBS) device coupled with hydrophilic interaction/reversed-phase (HILIC/RP) LC/MS/MS. The filter paper is directly integrated into the LC device using a homemade stainless-steel desorption cell. Without any sample pretreatment, analytes are desorbed from the paper towards an automated system of valves linking a zwitterionic HILIC column to an RP C18 column. In the same run, the polar fraction is separated on the zwitterionic HILIC column while the non-polar fraction is eluted on the RP C18 column. Both fractions are detected by IT-MS operating in full scan mode for the survey scan and in product ion mode for the dependent scan, using an ESI source. The procedure was evaluated by the simultaneous qualitative analysis of four probe compounds and their respective phase I and II metabolites spiked in whole blood. In addition, the method was successfully applied to the in vivo monitoring of buprenorphine metabolism after an intraperitoneal injection of 30 mg/kg in an adult female Wistar rat.

Relevance: 30.00%

Abstract:

Three standard radiation qualities (RQA 3, RQA 5 and RQA 9) and two screens, Kodak Lanex Regular and Insight Skeletal, were used to compare the imaging performance and dose requirements of the new Kodak Hyper Speed G and the current Kodak T-MAT G/RA medical x-ray films. The noise equivalent quanta (NEQ) and detective quantum efficiencies (DQE) of the four screen-film combinations were measured at three gross optical densities and compared with the characteristics of the Kodak CR 9000 system with GP (general purpose) and HR (high resolution) phosphor plates. The new Hyper Speed G film has double the intrinsic sensitivity of the T-MAT G/RA film and a higher contrast in the high optical density range for comparable exposure latitude. By providing both high sensitivity and high spatial resolution, the new film significantly improves the compromise between dose and image quality. As expected, the new film has a higher noise level and a lower signal-to-noise ratio than the standard film, although in the high-frequency range this is compensated for by better resolution, giving better DQE results, especially at high optical density. Both screen-film systems outperform the phosphor plates in terms of MTF and DQE for standard imaging conditions (Regular screen at RQA 5 and RQA 9 beam qualities). At low energy (RQA 3), the CR system has a low-frequency DQE comparable to that of the screen-film systems when used with a fine screen at low and middle optical densities, and a superior low-frequency DQE at high optical density.
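For reference, the two measured quantities are commonly related by the standard definitions (written here in the usual generic form, with S the large-area signal transfer factor and \bar{q} the incident photon fluence; the paper's exact sensitometric conventions may differ):

    \mathrm{NEQ}(f) = \frac{\left[ S \, \mathrm{MTF}(f) \right]^{2}}{\mathrm{NPS}(f)},
    \qquad
    \mathrm{DQE}(f) = \frac{\mathrm{NEQ}(f)}{\bar{q}}

so DQE compares the number of quanta the detector effectively uses at each spatial frequency with the number incident on it.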

Relevance: 30.00%

Abstract:

The flexibility of different regions of HIV-1 protease was examined by using a database of 73 X-ray structures that differ in sequence, ligands or both. The root-mean-square differences of the backbone across the set of structures were shown to vary with residue number in the same way as those obtained from molecular dynamics simulations, normal mode analyses and X-ray B-factors. This supports the idea that the observed structural changes provide a measure of the inherent flexibility of the protein, with specific interactions between the protease and the ligand playing a secondary role. The results suggest that the potential energy surface of the HIV-1 protease is characterized by many local minima with small energetic differences, some of which are sampled by the different X-ray structures of HIV-1 protease complexes. Interdomain correlated motions were calculated from the structural fluctuations, and the results were also in agreement with molecular dynamics simulations and normal mode analyses. Implications of the results for the drug resistance engendered by mutations are discussed briefly.
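Correlated motions of this kind are conventionally computed as the normalized covariance of positional fluctuations across the superposed ensemble; a sketch under assumed conventions (random coordinates stand in for the 73 aligned structures, with 99 residues per protease monomer):

    import numpy as np

    def correlated_motions(coords):
        """Normalized cross-correlation of positional fluctuations.
        coords: array (n_structures, n_residues, 3) of aligned CA positions.
        Returns C with C[i, j] = <dr_i . dr_j> / sqrt(<dr_i^2><dr_j^2>)."""
        d = coords - coords.mean(axis=0)       # fluctuations about the mean
        cov = np.einsum('sik,sjk->ij', d, d) / len(coords)
        norm = np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
        return cov / norm

    coords = np.random.default_rng(2).normal(size=(73, 99, 3))  # toy ensemble
    C = correlated_motions(coords)
    print(C.shape, round(C[0, 0], 3))   # (99, 99), diagonal entries equal 1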