943 results for software analysis
Abstract:
The maintenance and evolution of software systems has become a highly critical task in recent years due to the diversity and high demand of functionalities, devices, and users. Understanding and analyzing how new changes impact the quality attributes of the architecture of such systems is an essential prerequisite for avoiding the deterioration of their quality during their evolution. This thesis proposes an automated approach for analyzing variation of the performance quality attribute in terms of execution time (response time). It is implemented by a framework that adopts dynamic analysis and software repository mining techniques to provide an automated way of revealing potential sources (commits and issues) of performance variation in scenarios during the evolution of software systems. The approach defines four phases: (i) preparation: choosing the scenarios and preparing the target releases; (ii) dynamic analysis: determining the performance of scenarios and methods by computing their execution times; (iii) variation analysis: processing and comparing the dynamic analysis results across different releases; and (iv) repository mining: identifying issues and commits associated with the detected performance variation. Empirical studies were conducted to evaluate the approach from different perspectives. An exploratory study analyzed the feasibility of applying the approach to systems from different domains in order to automatically identify source code elements with performance variation and the changes that affected those elements during an evolution. This study analyzed three systems: (i) SIGAA, a web system for academic management; (ii) ArgoUML, a UML modeling tool; and (iii) Netty, a framework for network applications. Another study performed an evolutionary analysis by applying the approach to multiple releases of Netty and of the web frameworks Wicket and Jetty. In this study, 21 releases (seven of each system) were analyzed, totaling 57 scenarios. In summary, 14 scenarios with significant performance variation were found for Netty, 13 for Wicket, and 9 for Jetty. Additionally, feedback was obtained from eight developers of these systems through an online form. Finally, in the last study, a performance regression model was developed to indicate which commit properties are most likely to cause performance degradation. Overall, 997 commits were mined, of which 103 were retrieved from degraded source code elements and 19 from optimized ones, while 875 had no impact on execution time. The number of days before the release and the day of the week proved to be the most relevant variables of performance-degrading commits in our model. The area under the Receiver Operating Characteristic (ROC) curve of the regression model is 60%, which means that using the model to decide whether or not a commit will cause degradation is 10% better than a random decision.
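A minimal sketch of the variation-analysis phase, in Python, assuming a hypothetical data layout (per-scenario execution-time samples collected for two releases) rather than the thesis's actual framework:

```python
# Hypothetical sketch: flag scenarios whose mean execution time changes
# significantly between two releases (variation-analysis phase).
# Assumed data layout: {scenario name: [execution times in ms over repeated runs]}
from statistics import mean
from scipy.stats import ttest_ind

def flag_performance_variation(old_release, new_release, alpha=0.05):
    """Return scenarios whose execution time differs significantly between
    releases, together with the direction of the change."""
    flagged = {}
    for scenario, old_times in old_release.items():
        new_times = new_release.get(scenario)
        if not new_times:
            continue  # scenario not exercised in the new release
        # Welch's t-test over the repeated measurements of the same scenario
        _, p_value = ttest_ind(old_times, new_times, equal_var=False)
        if p_value < alpha:
            direction = "degraded" if mean(new_times) > mean(old_times) else "optimized"
            flagged[scenario] = (direction, round(p_value, 4))
    return flagged

old = {"login": [120, 118, 125], "report": [300, 310, 305]}
new = {"login": [119, 121, 123], "report": [420, 415, 430]}
print(flag_performance_variation(old, new))  # "report" is flagged as degraded
```

The repository-mining phase would then take the methods behind each flagged scenario and search the version-control history and issue tracker for the commits and issues that touched them.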
Abstract:
The advent of the Auger Engineering Radio Array (AERA) necessitates the development of a powerful framework for the analysis of radio measurements of cosmic ray air showers. As AERA performs "radio-hybrid" measurements of air shower radio emission in coincidence with the surface particle detectors and fluorescence telescopes of the Pierre Auger Observatory, the radio analysis functionality had to be incorporated in the existing hybrid analysis solutions for fluorescence and surface detector data. This goal has been achieved in a natural way by extending the existing Auger Offline software framework with radio functionality. In this article, we lay out the design, highlights and features of the radio extension implemented in the Auger Offline framework. Its functionality has achieved a high degree of sophistication and offers advanced features such as vectorial reconstruction of the electric field, advanced signal processing algorithms, a transparent and efficient handling of FFTs, a very detailed simulation of detector effects, and the read-in of multiple data formats including data from various radio simulation codes. The source code of this radio functionality can be made available to interested parties on request.
Abstract:
Scientists planning to use underwater stereoscopic image technologies are often faced with numerous problems during the methodological implementations: commercial equipment is too expensive; the setup or calibration is too complex; or the image processing (i.e. measuring objects in the stereo-images) is too complicated to be performed without a time-consuming phase of training and evaluation. The present paper addresses some of these problems and describes a workflow for stereoscopic measurements for marine biologists. It also provides instructions on how to assemble an underwater stereo-photographic system with two digital consumer cameras and gives step-by-step guidelines for setting up the hardware. The second part details a software procedure to correct stereo-image pairs for lens distortions, which is especially important when using cameras with non-calibrated optical units. The final part presents a guide to the process of measuring the lengths (or distances) of objects in stereoscopic image pairs. To reveal the applicability and the restrictions of the described systems and to test the effects of different types of camera (a compact camera and an SLR type), experiments were performed to determine the precision and accuracy of two generic stereo-imaging units: a diver-operated system based on two Olympus Mju 1030SW compact cameras and a cable-connected observatory system based on two Canon 1100D SLR cameras. In the simplest setup, without any correction for lens distortion, the low-budget Olympus Mju 1030SW system achieved mean accuracy errors (percentage deviation of a measurement from the object's real size) between 10.2% and -7.6% (overall mean value: -0.6%), depending on the size, orientation and distance of the measured object from the camera. With the single-lens reflex (SLR) system, very similar values between 10.1% and -3.4% (overall mean value: -1.2%) were observed. Correction of the lens distortion significantly improved the mean accuracy errors of either system. Moreover, system precision (the spread of the accuracy) improved significantly in both systems. Neither the use of a wide-angle converter nor multiple reassembly of the system had a significant negative effect on the results. The study shows that underwater stereophotography, independent of the system, has a high potential for robust and non-destructive in situ sampling and can be used without prior specialist training.
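For readers unfamiliar with how lengths are derived from a stereo pair, the following is a minimal sketch of the underlying triangulation, assuming an ideal rectified pinhole camera pair and hypothetical calibration values; it is not the paper's own processing chain:

```python
# Hypothetical sketch of stereo length measurement on a rectified image pair.
import math

FOCAL_PX = 1500.0       # assumed focal length in pixels (from calibration)
BASELINE_M = 0.20       # assumed distance between the two camera centres, metres
CX, CY = 960.0, 540.0   # assumed principal point (image centre)

def triangulate(x_left, y_left, x_right):
    """3D position (metres) of a point seen at (x_left, y_left) in the left
    image and at x_right in the right image."""
    disparity = x_left - x_right           # pixels; requires rectified images
    z = FOCAL_PX * BASELINE_M / disparity  # depth along the optical axis
    x = (x_left - CX) * z / FOCAL_PX
    y = (y_left - CY) * z / FOCAL_PX
    return (x, y, z)

def measure(point_a, point_b):
    """Distance between two object points (e.g. the two ends of a fish),
    each given as (x_left, y_left, x_right)."""
    return math.dist(triangulate(*point_a), triangulate(*point_b))

# Example: the two ends of an object marked in both images
print(f"{measure((1000, 500, 940), (1200, 520, 1130)):.3f} m")
```

In practice the pixel coordinates would first be corrected for lens distortion, which is the step the paper shows to improve both accuracy and precision.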
Abstract:
With the increasing complexity of today's software, the software development process is becoming highly time- and resource-consuming. The increasing number of software configurations, input parameters, usage scenarios, supporting platforms, external dependencies, and versions plays an important role in expanding the costs of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time identifying the scenarios leading to those faults and root-causing the problems. While software debugging remains largely manual, the same is no longer true of software testing and verification. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging that leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing already existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively, the sequence covers of the failing test cases are extracted. Afterwards, commonalities between these test case sequence covers are extracted, processed, analyzed, and then presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences shared between a number of test cases failing for the same reason resemble the faulty execution path, and hence the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. Optimization techniques are devised to generate shorter and more logical sequence covers, and to select subsequences with a high likelihood of containing the root cause among the set of all possible common subsequences. A hybrid static/dynamic analysis approach is designed to trace back the common subsequences from the end to the root cause. A debugging tool is created to enable developers to use the approach and to integrate it with an existing Integrated Development Environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool's suggestions and their source code counterparts. Finally, a comparison between the developed approach and the state-of-the-art techniques shows that developers need to inspect only a small number of lines in order to find the root cause of the fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both the algorithm running time and the output subsequence length.
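As an illustration of the common-subsequence idea (hypothetical code, not the dissertation's actual tool or algorithm), the sequence covers of failing tests can be intersected by folding a pairwise longest common subsequence:

```python
# Hypothetical sketch: narrow the search for a faulty execution path by
# intersecting the code sequences covered by failing test cases.
def lcs(a, b):
    """Longest common subsequence of two sequences of code locations."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]   # dp[i][j] = LCS length of a[i:], b[j:]
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            if a[i] == b[j]:
                dp[i][j] = dp[i + 1][j + 1] + 1
            else:
                dp[i][j] = max(dp[i + 1][j], dp[i][j + 1])
    out, i, j = [], 0, 0
    while i < n and j < m:                       # walk the table to recover one LCS
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif dp[i + 1][j] >= dp[i][j + 1]:
            i += 1
        else:
            j += 1
    return out

def candidate_faulty_path(failing_covers):
    """Fold LCS over all failing-test covers; the result is a subsequence
    shared by every failing test, i.e. a candidate faulty execution path."""
    result = failing_covers[0]
    for cover in failing_covers[1:]:
        result = lcs(result, cover)
    return result

covers = [
    ["A.init", "B.parse", "C.write", "D.close"],
    ["A.init", "X.log", "B.parse", "C.write"],
    ["B.parse", "C.write", "D.close"],
]
print(candidate_faulty_path(covers))  # ['B.parse', 'C.write']
```

The more failing tests that are folded in, the shorter the shared subsequence becomes, which is how a large number of test cases narrows the search space described above.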
Abstract:
For a long time, electronic data analysis has been associated with quantitative methods. However, Computer Assisted Qualitative Data Analysis Software (CAQDAS) packages are increasingly being developed. Although CAQDAS has been available for decades, very few qualitative health researchers report using it. This may be due to the difficulty of mastering the software and the misconceptions associated with using CAQDAS. While the issue of mastering CAQDAS has received ample attention, little has been done to address the misconceptions associated with it. In this paper, the author reflects on his experience of interacting with one of the popular CAQDAS packages (NVivo) in order to provide evidence-based implications of using the software. The key message is that, unlike statistical software, the main function of CAQDAS is not to analyse data but rather to aid the analysis process, which the researcher must always remain in control of. In other words, researchers must also recognize that no software can analyse qualitative data. CAQDAS packages are essentially data management tools that support the researcher during analysis.
Abstract:
To investigate the degree of T2 relaxometry changes over time in groups of patients with familial mesial temporal lobe epilepsy (FMTLE) and asymptomatic relatives. We conducted both cross-sectional and longitudinal analyses of T2 relaxometry with Aftervoxel, in-house software for medical image visualization. The cross-sectional study included 35 subjects (26 with FMTLE and 9 asymptomatic relatives) and 40 controls; the longitudinal study comprised 30 subjects (21 with FMTLE and 9 asymptomatic relatives; the mean time interval between MRIs was 4.4 ± 1.5 years) and 16 controls. To increase the size of our groups of patients and relatives, we combined data acquired on 2 scanners (2T and 3T) and obtained z-scores using their respective controls. A general linear model in SPSS 21® was used for statistical analysis. In the cross-sectional analysis, elevated T2 relaxometry was identified for subjects with seizures and intermediate values for asymptomatic relatives compared to controls. Subjects with MRI signs of hippocampal sclerosis presented elevated T2 relaxometry in the ipsilateral hippocampus, while patients and asymptomatic relatives with normal MRI presented elevated T2 values in the right hippocampus. The longitudinal analysis revealed a significant increase in T2 relaxometry for the ipsilateral hippocampus exclusively in patients with seizures. The longitudinal increase of T2 signal in patients with seizures suggests the existence of an interaction between ongoing seizures and the underlying pathology, causing progressive damage to the hippocampus. The identification of elevated T2 relaxometry in asymptomatic relatives and in patients with normal MRI suggests that genetic factors may be involved in the development of some mild hippocampal abnormalities in FMTLE.
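A minimal sketch of the scanner-pooling step (an assumed reading of the method, not the study's SPSS pipeline): each subject's T2 value is expressed as a z-score against the control group acquired on the same scanner, so that 2T and 3T data can be combined:

```python
# Hypothetical sketch: normalise T2 relaxometry values from each scanner
# against that scanner's own control group before pooling the data.
from statistics import mean, stdev

def zscores(values, controls):
    """z-score of each subject value relative to the control distribution
    acquired on the same scanner."""
    mu, sigma = mean(controls), stdev(controls)
    return [(v - mu) / sigma for v in values]

controls_2t = [98.0, 101.5, 99.2, 100.8, 97.6]   # hypothetical hippocampal T2 values (ms)
patients_2t = [112.3, 108.9, 104.1]
print([round(z, 2) for z in zscores(patients_2t, controls_2t)])
```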
Abstract:
Chronic telogen effluvium (CTE), a poorly understood condition, can be confused with or may be a prodrome to female pattern hair loss (FPHL). The pathogenesis of both is related to follicle cycle shortening and possibly to blood supply changes. To analyze a number of histomorphometric and immunohistochemical findings through vascular endothelial growth factor (VEGF), Ki-67, and CD31 immunostaining in scalp biopsies of 20 patients with CTE, 17 patients with mild FPHL and 9 controls. Ki-67 index and VEGF optical density were analyzed at the follicular outer sheath using ImageJ software. CD31 microvessel density was assessed by a Chalkley grid. Significant follicle miniaturization and higher density of nonanagen follicles were found in FPHL, compared with patients with CTE and controls. Ki-67+ index correlated positively with FPHL histological features. The FPHL group showed the highest VEGF optical density, followed by the CTE and control groups. No differences were found in CD31 microvessel density between the three groups. Histomorphometric results establish CTE as a distinct disorder, separate from FPHL from its outset. Its pathogenic mechanisms are also distinct. These findings support the proposed mechanism of 'immediate telogen release' for CTE, leading to cycle synchronization. For FPHL, accelerated anagen follicular mitotic rates and, thus, higher Ki-67 and VEGF values, would leave less time for differentiation, resulting in hair miniaturization.
Abstract:
Tilted implants are conventionally used in oral rehabilitation of the heavily resorbed maxilla to avoid bone grafts; however, few studies evaluate the biomechanical behavior when different implant angulations are used. The aim of this study was to evaluate, through the photoelastic method, two different angulations and cantilever lengths in fixed implant-supported maxillary complete dentures. Two groups were evaluated: G15 (distal implants tilted 15°) and G35 (distal implants tilted 35°), n = 6. In each model, 2 distally tilted implants (3.5 x 15 mm, cylindrical-conical) and 2 parallel implants in the anterior region (3.5 x 10 mm) were installed. The photoelastic models were submitted to three vertical load tests: at the end of the cantilever, on the last pillar, and on all pillars at the same time. Shear stress was obtained with the Fringes software, and values for total, cervical and apical stress were recorded. The quantitative analysis was performed using Student's t-test and the Mann-Whitney test at a 5% significance level. There was no difference between G15 and G35 for total stress regardless of load type. In the apical region, G35 reduced stress values under the distal loads (at the cantilever, p = 0.03; at the last pillar, p = 0.02) without increasing the stress level in the cervical region. Under the load on all pillars, G35 showed a higher stress concentration in the cervical region (p = 0.04). For distal loads, G15 showed an increase in stress in the apical region, whereas for the load on all pillars the G35 inclination increased stress values in the cervical region.
Abstract:
To determine the most adequate number and size of tissue microarray (TMA) cores for pleomorphic adenoma immunohistochemical studies. Eighty-two pleomorphic adenoma cases were distributed in 3 TMA blocks assembled in triplicate containing 1.0-, 2.0-, and 3.0-mm cores. Immunohistochemical analyses against cytokeratin 7, Ki67, p63, and CD34 were performed and subsequently evaluated with the PixelCount, nuclear, and microvessel software applications. Relative to conventional whole-section slides, the 1.0-mm TMA presented lower results than the 2.0- and 3.0-mm TMAs. Possibly because of an increased amount of stromal tissue, 3.0-mm cores presented a higher microvessel density. Comparing the results obtained with one, two, and three 2.0-mm cores, there was no difference between triplicate or duplicate TMAs and a single-core TMA. Considering the possible loss of cylinders during immunohistochemical reactions, 2.0-mm TMAs in duplicate are a more reliable approach for pleomorphic adenoma immunohistochemical studies.
Abstract:
Chemical cross-linking has emerged as a powerful approach for the structural characterization of proteins and protein complexes. However, the correct identification of covalently linked (cross-linked or XL) peptides analyzed by tandem mass spectrometry is still an open challenge. Here we present SIM-XL, a software tool that can analyze data generated through commonly used cross-linkers (e.g., BS3/DSS). Our software introduces a new paradigm for search-space reduction, which ultimately accounts for its increase in speed and sensitivity. Moreover, our search engine is the first to capitalize on reporter ions for selecting tandem mass spectra derived from cross-linked peptides. It also provides a 2D interaction map and a spectrum-annotation tool unmatched by other tools of its kind. We show SIM-XL to be more sensitive and faster than a competing tool when analyzing a data set obtained from the human HSP90. The software is freely available for academic use at http://patternlabforproteomics.org/sim-xl. A video demonstrating the tool is available at http://patternlabforproteomics.org/sim-xl/video. SIM-XL is the first tool to support XL data in the mzIdentML format; all data are thus available from the ProteomeXchange consortium (identifier PXD001677).
Abstract:
This article aimed to compare the accuracy of the linear measurement tools of different commercial software packages. Eight fully edentulous dry mandibles were selected for this study. Incisor, canine, premolar, first molar and second molar regions were selected. Cone beam computed tomography (CBCT) images were obtained with i-CAT Next Generation. Linear bone measurements were performed by one observer on the cross-sectional images using three different software packages: XoranCat®, OnDemand3D® and KDIS3D®, all able to assess DICOM images. In addition, 25% of the sample was reevaluated for the purpose of reproducibility. The mandibles were sectioned to obtain the gold standard for each region. Intraclass correlation coefficients (ICC) were calculated to examine the agreement between the two periods of evaluation; one-way analysis of variance with the post-hoc Dunnett test was used to compare each of the software-derived measurements with the gold standard. The ICC values were excellent for all software packages. The least difference between the software-derived measurements and the gold standard was obtained with OnDemand3D and KDIS3D (-0.11 and -0.14 mm, respectively), and the greatest with XoranCat (+0.25 mm). However, there was no statistically significant difference between the measurements obtained with the different software packages and the gold standard (p > 0.05). In conclusion, linear bone measurements were not influenced by the software package used to reconstruct the image from CBCT DICOM data.
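A minimal sketch of the statistical comparison described above, with hypothetical measurement values rather than the authors' own analysis script (scipy.stats.dunnett requires SciPy 1.11 or later):

```python
# Hypothetical sketch: compare linear measurements from each software package
# against the physical gold standard (one-way ANOVA + Dunnett post-hoc test).
from scipy.stats import f_oneway, dunnett

gold     = [12.1, 10.4, 9.8, 11.3, 8.9]   # hypothetical caliper measurements (mm)
xorancat = [12.3, 10.7, 10.0, 11.6, 9.1]  # hypothetical software-derived values
ondemand = [12.0, 10.3, 9.7, 11.2, 8.8]
kdis3d   = [12.0, 10.2, 9.6, 11.2, 8.8]

# Overall test that at least one group mean differs from the others
print(f_oneway(gold, xorancat, ondemand, kdis3d))

# Dunnett's test: each software package compared against the gold-standard control
result = dunnett(xorancat, ondemand, kdis3d, control=gold)
print(result.pvalue)  # one p-value per software package
```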
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
This study investigated the influence of cervical preflaring with different rotary instruments on determination of the initial apical file (IAF) in mesiobuccal roots of mandibular molars. Fifty human mandibular molars whose mesial roots presented two clearly separated apical foramina (mesiobuccal and mesiolingual) were used. After standard access opening and removal of pulp tissue, the working length (WL) was determined at 1 mm short of the root apex. Five groups (n=10) were formed at random, according to the type of instrument used for cervical preflaring. In group 1, the size of the IAF was determined without preflaring of the cervical and middle root canal thirds. In groups 2 to 5, preflaring was performed with Gates-Glidden drills, ProTaper instruments, EndoFlare instruments and LA Axxess burs, respectively. Canals were sized manually with K-files, starting with size 08 K-files, inserted passively up to the WL. File sizes were increased until a binding sensation was felt at the WL, and the size of the file was recorded. The instrument corresponding to the IAF was fixed into the canal at the WL with methylcyanoacrylate. The teeth were then sectioned transversally 1 mm short of the apex, with the IAF in position. Cross-sections of the WL region were examined under field-emission gun (FEG) scanning electron microscopy, and the discrepancies between canal diameter and IAF diameter were calculated using the "rule" tool of the microscope's proprietary software. The measurements (µm) were analyzed statistically by the Kruskal-Wallis and Dunn's tests at a 5% significance level. There were statistically significant differences among the groups (p<0.05). The non-flared group had the greatest discrepancy (125.30 ± 51.54) and differed significantly from all flared groups (p<0.05). Cervical preflaring with LA Axxess burs produced the least discrepancies (55.10 ± 48.31), followed by EndoFlare instruments (68.20 ± 42.44), Gates-Glidden drills (68.90 ± 42.46) and ProTaper files (77.40 ± 73.19). However, no significant differences (p>0.05) were found among the rotary instruments. In conclusion, cervical preflaring improved IAF fitting to the canals at the WL in mesiobuccal roots of mandibular molars. The rotary instruments evaluated in this study did not differ from each other regarding the discrepancies produced between the IAF size and canal diameter at the WL.