63 results for Analysis software
Abstract:
OBJECTIVES: To determine the accuracy of automated vessel-segmentation software for vessel-diameter measurements based on three-dimensional contrast-enhanced magnetic resonance angiography (3D-MRA). METHOD: In 10 patients with high-grade carotid stenosis, automated measurements of both carotid arteries were obtained with 3D-MRA by two independent investigators and compared with manual measurements obtained by digital subtraction angiography (DSA) and 2D maximum-intensity projection (2D-MIP) based on MRA and duplex ultrasonography (US). In 42 patients undergoing carotid endarterectomy (CEA), intraoperative measurements (IOP) were compared with postoperative 3D-MRA and US. RESULTS: Mean interoperator variability was 8% for measurements by DSA and 11% by 2D-MIP, but there was no interoperator variability with the automated 3D-MRA analysis. Good correlations were found between DSA (standard of reference), manual 2D-MIP (rP=0.6) and automated 3D-MRA (rP=0.8). Excellent correlations were found between IOP, 3D-MRA (rP=0.93) and US (rP=0.83). CONCLUSION: Automated 3D-MRA-based vessel segmentation and quantification result in accurate measurements of extracerebral-vessel dimensions.
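A minimal sketch of the kind of agreement analysis reported above: computing the Pearson correlation and mean bias between automated 3D-MRA diameters and a DSA reference. The values below are invented for illustration and are not the study's data.

```python
import numpy as np

# hypothetical vessel diameters (mm); not the patients' measurements
dsa_mm = np.array([2.1, 3.4, 1.8, 2.9, 4.0, 2.5, 3.1, 1.6, 2.2, 3.7])   # DSA reference
mra_mm = np.array([2.0, 3.6, 1.7, 3.0, 3.8, 2.6, 3.3, 1.8, 2.1, 3.9])   # automated 3D-MRA

r_p = np.corrcoef(dsa_mm, mra_mm)[0, 1]    # Pearson correlation coefficient (rP)
bias = np.mean(mra_mm - dsa_mm)            # mean difference (systematic bias)
print(f"rP = {r_p:.2f}, mean bias = {bias:.2f} mm")
```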
Abstract:
OBJECTIVE: Meta-analysis of studies of the accuracy of diagnostic tests currently uses a variety of methods. Statistically rigorous hierarchical models require expertise and sophisticated software. We assessed whether any of the simpler methods can in practice give adequately accurate and reliable results. STUDY DESIGN AND SETTING: We reviewed six methods for meta-analysis of diagnostic accuracy: four simple commonly used methods (simple pooling, separate random-effects meta-analyses of sensitivity and specificity, separate meta-analyses of positive and negative likelihood ratios, and the Littenberg-Moses summary receiver operating characteristic [ROC] curve) and two more statistically rigorous approaches using hierarchical models (bivariate random-effects meta-analysis and hierarchical summary ROC curve analysis). We applied the methods to data from a sample of eight systematic reviews chosen to illustrate a variety of patterns of results. RESULTS: In each meta-analysis, there was substantial heterogeneity between the results of different studies. Simple pooling of results gave misleading summary estimates of sensitivity and specificity in some meta-analyses, and the Littenberg-Moses method produced summary ROC curves that diverged from those produced by more rigorous methods in some situations. CONCLUSION: The closely related hierarchical summary ROC curve or bivariate models should be used as the standard method for meta-analysis of diagnostic accuracy.
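To make the Littenberg-Moses approach mentioned above concrete, here is a short sketch of its core step: regress D = logit(TPR) - logit(FPR) on S = logit(TPR) + logit(FPR) across studies and map the fitted line back to ROC space to obtain the summary ROC curve. The per-study rates below are invented; this is not the reviews' data or software.

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

# hypothetical per-study true- and false-positive rates
tpr = np.array([0.90, 0.85, 0.78, 0.92, 0.70])
fpr = np.array([0.10, 0.20, 0.05, 0.25, 0.08])

D = logit(tpr) - logit(fpr)
S = logit(tpr) + logit(fpr)
b, a = np.polyfit(S, D, 1)                 # least-squares fit of D = a + b*S

# summary ROC curve: logit(TPR) = (a + (1 + b) * logit(FPR)) / (1 - b)
grid_fpr = np.linspace(0.01, 0.99, 50)
sroc_tpr = expit((a + (1.0 + b) * logit(grid_fpr)) / (1.0 - b))
print(sroc_tpr[:5])
```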
Abstract:
Advances in spinal cord injury (SCI) research are dependent on quality animal models, which in turn rely on sensitive outcome measures able to detect functional differences in animals following injury. To date, most measurements of dysfunction following SCI rely either on the subjective rating of observers or the slow throughput of manual gait assessment. The present study compares the gait of normal and contusion-injured mice using the TreadScan system. TreadScan utilizes a transparent treadmill belt and a high-speed camera to capture the footprints of animals and automatically analyze gait characteristics. Adult female C57Bl/6 mice were introduced to the treadmill prior to receiving either a standardized mild, moderate, or sham contusion spinal cord injury. TreadScan gait analyses were performed weekly for 10 weeks and compared with scores on the Basso Mouse Scale (BMS). Results indicate that this software successfully differentiates sham animals from injured animals on a number of gait characteristics, including hindlimb swing time, stride length, toe spread, and track width. Differences were found between mild and moderate contusion injuries, indicating a high degree of sensitivity within the system. Rear track width, a measure of the animal's hindlimb base of support, correlated strongly both with spared white matter percentage and with terminal BMS. TreadScan allows for an objective and rapid behavioral assessment of locomotor function following mild-moderate contusive SCI, where the majority of mice still exhibit hindlimb weight support and plantar paw placement during stepping.
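As a generic illustration of how gait metrics such as stride length and rear track width can be derived from paw-placement coordinates, consider the sketch below. It is not the TreadScan implementation, and the coordinates are invented.

```python
import numpy as np

# successive hind-paw placements as (x, y) positions in cm (hypothetical)
left_hind  = np.array([[1.0, 0.8], [4.1, 0.9], [7.3, 0.8]])
right_hind = np.array([[2.6, 2.0], [5.7, 2.1], [8.8, 2.0]])

# stride length: distance between successive placements of the same paw
stride_lengths = np.linalg.norm(np.diff(left_hind, axis=0), axis=1)
# rear track width: lateral distance between left and right hind paws
track_width = np.mean(np.abs(right_hind[:, 1] - left_hind[:, 1]))

print(f"mean stride length = {stride_lengths.mean():.2f} cm, "
      f"rear track width = {track_width:.2f} cm")
```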
Abstract:
The increasing amount of data available about software systems poses new challenges for re- and reverse engineering research, as the proposed approaches need to scale. In this context, concerns about meta-modeling and analysis techniques need to be augmented by technical concerns about how to reuse and how to build upon the efforts of previous research. Moose is an extensive reverse-engineering infrastructure that has evolved for over 10 years and promotes the reuse of engineering efforts in research. Moose accommodates various types of data modeled in the FAMIX family of meta-models. The goal of this half-day workshop is to strengthen the community of researchers and practitioners working in re- and reverse engineering by providing a forum for building future research starting from Moose and FAMIX as a shared infrastructure.
Abstract:
In rapidly evolving domains such as Computer Assisted Orthopaedic Surgery (CAOS), emphasis is often put first on innovation and new functionality, rather than on developing the common infrastructure needed to support integration and reuse of these innovations. In fact, developing such an infrastructure is often considered to be a high-risk venture given the volatility of such a domain. We present CompAS, a method that exploits the very evolution of innovations in the domain to carry out the necessary quantitative and qualitative commonality and variability analysis, especially in the case of scarce system documentation. We show how our technique applies to the CAOS domain by using conference proceedings as a key source of information about the evolution of features in CAOS systems over a period of several years. We detect and classify evolution patterns to determine functional commonality and variability. We also identify non-functional requirements to help capture domain variability. We have validated our approach by evaluating the degree to which representative test systems can be covered by the common and variable features produced by our analysis.
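A minimal sketch of the kind of commonality/variability classification described above: given a made-up table of which features appear in which CAOS systems, features are labeled common or variable by their support across systems. The feature names, systems, and 0.75 cutoff are assumptions for illustration, not CompAS itself.

```python
systems = ["SysA", "SysB", "SysC", "SysD"]
features = {
    "intraoperative navigation": {"SysA", "SysB", "SysC", "SysD"},
    "fluoroscopy-based registration": {"SysA", "SysC"},
    "robotic guidance": {"SysD"},
}

COMMONALITY_THRESHOLD = 0.75   # illustrative cutoff

for feature, present_in in features.items():
    support = len(present_in) / len(systems)          # fraction of systems offering the feature
    kind = "common" if support >= COMMONALITY_THRESHOLD else "variable"
    print(f"{feature}: support = {support:.2f} -> {kind}")
```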
Abstract:
Background: Balkan endemic nephropathy (BEN) is a chronic progressive interstitial nephritis in striking correlation with uroepithelial tumours of the upper urinary tract. The disease has an endemic distribution in the Danube river regions of several Balkan countries. DNA methylation is a primary epigenetic modification involved in major processes such as cancer, genomic imprinting and gene silencing. The significance of CpG island methylation status in normal development, cell differentiation and gene expression is widely recognized, although it remains poorly understood. Methods: We performed whole-genome DNA methylation array analysis on pooled DNA samples from the peripheral blood of 159 affected individuals and 170 healthy individuals. This technique allowed us to determine the methylation status of 27,627 CpG islands throughout the whole genome in healthy controls and BEN patients. Thus we obtained the methylation profile of BEN patients from Bulgarian and Serbian endemic regions. Results: Using specifically developed software, we compared the methylation profiles of BEN patients and corresponding controls and revealed the differentially methylated regions (DMRs). We then compared the DMRs between all patient-control pairs to determine common changes in the epigenetic profiles. SEC61G, IL17RA and HDAC11 proved to be differentially methylated across all patient-control pairs; the CpG islands of all three genes were hypomethylated compared with controls. This suggests that dysregulation of these genes, which are involved in the immunological response, could be a common mechanism in BEN pathogenesis in both endemic regions and in both genders. Conclusion: Our data suggest a new hypothesis that immunological dysregulation plays a role in BEN etiopathogenesis. Keywords: Epigenetics; Whole genome array analysis; Balkan endemic nephropathy
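An illustrative sketch (not the study's software) of how differentially methylated CpG islands can be flagged from pooled array data: compare the methylation level of each island between the patient and control pools and report islands whose difference exceeds a threshold. The beta values and the 0.2 cutoff are assumptions for illustration.

```python
import numpy as np

cpg_islands = ["SEC61G", "IL17RA", "HDAC11", "GENE_X"]
beta_patients = np.array([0.25, 0.30, 0.28, 0.61])   # pooled methylation levels, patients (hypothetical)
beta_controls = np.array([0.55, 0.58, 0.52, 0.63])   # pooled methylation levels, controls (hypothetical)

delta = beta_patients - beta_controls
for name, d in zip(cpg_islands, delta):
    if d <= -0.2:                                     # illustrative hypomethylation threshold
        print(f"{name}: hypomethylated in patients (delta beta = {d:.2f})")
```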
Abstract:
Coordinated eye and head movements occur simultaneously when scanning the visual world for relevant targets. However, measuring both eye and head movements in experiments that allow natural head movements can be challenging. This paper provides an approach to studying eye-head coordination: First, we demonstrate the capabilities and limits of the eye-head tracking system used and compare it to other technologies. Second, a behavioral task is introduced to invoke eye-head coordination. Third, a method is introduced to reconstruct signal loss in video-based oculography caused by corneal reflection artifacts in order to extend the tracking range. Finally, parameters of eye-head coordination are identified using EHCA (eye-head coordination analyzer), MATLAB software developed to analyze eye-head shifts. To demonstrate the capabilities of the approach, a study with 11 healthy subjects was performed to investigate motion behavior. The approach presented here is discussed as an instrument to explore eye-head coordination, which may lead to further insights into attentional and motor symptoms of certain neurological or psychiatric diseases, e.g., schizophrenia.
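A minimal sketch, under assumptions, of reconstructing short signal losses in a video-based eye-position trace (such as those caused by corneal reflection artifacts) by linear interpolation over the missing samples; the paper's actual reconstruction method may differ, and the trace below is synthetic.

```python
import numpy as np

# horizontal eye position in degrees; NaN marks samples lost to reflection artifacts
eye_x = np.array([10.0, 10.5, np.nan, np.nan, 12.1, 12.4, np.nan, 13.0])
samples = np.arange(eye_x.size)

valid = ~np.isnan(eye_x)
reconstructed = eye_x.copy()
# fill the gaps by interpolating between the surrounding valid samples
reconstructed[~valid] = np.interp(samples[~valid], samples[valid], eye_x[valid])

print(reconstructed)
```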
Abstract:
The platform-independent software package consisting of the oligonucleotide mass assembler (OMA) and the oligonucleotide peak analyzer (OPA) was created to support the analysis of oligonucleotide mass spectra. It calculates all theoretically possible fragments of a given input sequence and annotates them to an experimental spectrum, thus saving a large amount of manual processing time. The software performs analysis of precursor and product ion spectra of oligonucleotides and their analogues comprising user-defined modifications of the backbone, the nucleobases, or the sugar moiety, as well as adducts with metal ions or drugs. The ability to expand the library of building blocks and to implement individual structural variations makes it extremely useful for supporting the analysis of therapeutically active compounds. The functionality of the software tool is demonstrated on the examples of a platinated double-stranded oligonucleotide and a modified RNA sequence. Experiments also reveal the unique dissociation behavior of platinated higher-order DNA structures.
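A hedged sketch of the annotation step described above: matching theoretically possible fragment masses against experimental peaks within a mass tolerance. The fragment labels, masses, and tolerance are placeholders, not real oligonucleotide fragment values, and this is not the OMA/OPA code.

```python
# fragment label -> theoretical mass in Da (illustrative placeholder values)
theoretical = {"w3": 958.13, "a4-B": 1120.18, "y5": 1489.25}
experimental_peaks = [958.14, 1203.40, 1489.23]     # observed masses in Da (illustrative)
TOLERANCE_PPM = 20.0                                 # assumed matching tolerance

for peak in experimental_peaks:
    for label, mass in theoretical.items():
        # annotate a peak when it lies within the ppm tolerance of a theoretical fragment
        if abs(peak - mass) / mass * 1e6 <= TOLERANCE_PPM:
            print(f"peak {peak:.2f} Da annotated as {label} (theoretical {mass:.2f} Da)")
```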
Abstract:
Given the increasing interest in using social software for company-internal communication and collaboration, this paper examines drivers and inhibitors of micro-blogging adoption at the workplace. While nearly one in two companies is currently planning to introduce social software, there is no empirically validated research on employees’ adoption. In this paper, we build on previous focus group results and test our research model in an empirical study using Structural Equation Modeling. Based on our findings, we derive recommendations on how to foster adoption. We suggest that micro-blogging should be presented to employees as an efficient means of communication, personal brand building, and knowledge management. In order to particularly promote content contribution, privacy concerns should be eased by setting clear rules on who has access to postings and for how long they will be archived.
Abstract:
This work contributes to the ongoing debate on the productivity paradox by considering CIOs’ perceptions of IT business value. Applying regression analysis to data from an international survey, we study how the adoption of certain types of enterprise software affects the CIOs’ perception of the impact of IT on the firm’s business activities and vice versa. Other potentially important factors such as country, sector and size of the firms are also taken into account. Our results indicate stronger support for the impact of perceived IT benefits on the adoption of enterprise software than vice versa. CIOs based in the US perceive IT benefits more strongly than their German counterparts. Furthermore, certain types of enterprise software seem to be more prevalent in the US.
Abstract:
IT has turned out to be a key factor in gaining maturity in Business Process Management (BPM). This book presents a worldwide investigation conducted among companies from the ‘Forbes Global 2000’ list to explore the current usage of software throughout the BPM life cycle and to identify the companies’ requirements concerning process modelling. The responses from 130 companies indicate that, at present, it is mainly software for process description and analysis that is required, while process execution is supported by general software such as databases, ERP systems and office tools. The resulting complex system landscapes give rise to distinct requirements for BPM software, while the process modelling requirements can be equally satisfied by the most common languages (BPMN, UML, EPC).
Abstract:
We present results of a benchmark test evaluating the resource allocation capabilities of the project management software packages Acos Plus.1 8.2, CA SuperProject 5.0a, CS Project Professional 3.0, MS Project 2000, and Scitor Project Scheduler 8.0.1. The tests are based on 1560 instances of precedence- and resource-constrained project scheduling problems. For different complexity scenarios, we analyze the deviation of the makespan obtained by the software packages from the best feasible makespan known. Among the tested software packages, Acos Plus.1 and Project Scheduler show the best resource allocation performance. Moreover, our numerical analysis reveals a considerable performance gap between the implemented methods and state-of-the-art project scheduling algorithms, especially for large-sized problems. Thus, there is still a significant potential for improving solutions to resource allocation problems in practice.
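A small sketch of the evaluation metric described above: the relative deviation of each package's makespan from the best feasible makespan known for an instance. The instance names, best-known values, and package results below are invented for illustration.

```python
# best feasible makespans known per instance (hypothetical)
best_known = {"inst_001": 43, "inst_002": 57}

# makespans produced by two of the tested packages (hypothetical)
package_makespans = {
    "inst_001": {"Acos Plus.1": 45, "MS Project 2000": 48},
    "inst_002": {"Acos Plus.1": 57, "MS Project 2000": 61},
}

for inst, results in package_makespans.items():
    for package, makespan in results.items():
        # percentage deviation from the best known makespan
        deviation = (makespan - best_known[inst]) / best_known[inst] * 100.0
        print(f"{inst} / {package}: +{deviation:.1f}% above best known makespan")
```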