983 results for Software clones Detection
Abstract:
HYPOTHESIS Facial nerve monitoring can be used synchronously with a high-precision robotic tool as a functional warning to prevent a collision of the drill bit with the facial nerve during direct cochlear access (DCA). BACKGROUND Minimally invasive direct cochlear access (DCA) aims to eliminate the need for a mastoidectomy by drilling a small tunnel through the facial recess to the cochlea with the aid of stereotactic tool guidance. Because the procedure is performed in a blind manner, structures such as the facial nerve are at risk. Neuromonitoring is a commonly used tool to help surgeons identify the facial nerve (FN) during routine surgical procedures in the mastoid. Recently, neuromonitoring technology was integrated into a commercially available drill system, enabling real-time monitoring of the FN. The objective of this study was to determine whether this drilling system could be used to warn of an impending collision with the FN during robot-assisted DCA. MATERIALS AND METHODS The sheep was chosen as a suitable model for this study because of its similarity to human ear anatomy. The same surgical workflow applicable to human patients was performed in the animal model. Bone screws, serving as reference fiducials, were placed in the skull near the ear canal. The sheep head was imaged using a computed tomography scanner, and segmentation of the FN, mastoid, and other relevant structures, as well as planning of drilling trajectories, was carried out using a dedicated software tool. During the actual procedure, a surgical drill system was connected to a nerve monitor and guided by a custom-built robot system. As the planned trajectories were drilled, stimulation and EMG response signals were recorded. A postoperative analysis was performed after each surgery to determine the actual drilled positions. RESULTS Using the calibrated pose synchronized with the EMG signals, the precise relationship between distance to the FN and EMG response at 3 different stimulation intensities could be determined for 11 different tunnels drilled in 3 different subjects. CONCLUSION From the results, it was determined that the current implementation of the neuromonitoring system lacks the sensitivity and repeatability necessary to be used as a warning device in robotic DCA. We hypothesize that this is primarily because of the stimulation pattern achieved using a noninsulated drill as a stimulating probe. Further work is necessary to determine whether specific changes to the design can improve the sensitivity and specificity.
Abstract:
OBJECTIVES The aim of this phantom study was to minimize the radiation dose by finding the best combination of low tube current and low voltage that would result in accurate volume measurements when compared to standard CT imaging, without significantly decreasing the sensitivity of detecting lung nodules either with or without the assistance of CAD. METHODS An anthropomorphic chest phantom containing artificial solid and ground glass nodules (GGNs, 5-12 mm) was examined with a 64-row multi-detector CT scanner at three tube currents of 100, 50 and 25 mAs in combination with three tube voltages of 120, 100 and 80 kVp. This resulted in eight different protocols that were then compared to the standard CT protocol (100 mAs/120 kVp). For each protocol, at least 127 different nodules were scanned in 21-25 phantoms. The nodules were analyzed in two separate sessions by three independent, blinded radiologists and by computer-aided detection (CAD) software. RESULTS The mean sensitivity of the radiologists for identifying solid lung nodules on standard CT was 89.7% ± 4.9%. The sensitivity was not significantly impaired when the tube current and voltage were lowered at the same time, except at the lowest exposure level of 25 mAs/80 kVp [80.6% ± 4.3% (p = 0.031)]. Compared to standard CT, the sensitivity for detecting GGNs was significantly lower at all dose levels when the voltage was 80 kVp; this result was independent of the tube current. CAD significantly increased the radiologists' sensitivity for detecting solid nodules at all dose levels (by 5-11%). No significant volume measurement errors (VMEs) were documented for the radiologists or the CAD software at any dose level. CONCLUSIONS Our results suggest that a CT protocol with 25 mAs and 100 kVp is optimal for detecting solid and ground glass nodules in lung cancer screening. The use of CAD software is highly recommended at all dose levels.
Abstract:
Polymorphism, along with inheritance, is one of the most important features of object-oriented languages, but it is also one of the biggest obstacles to source code comprehension. Depending on the run-time type of the receiver of a message, any one of a number of possible methods may be invoked. Several algorithms for creating accurate call graphs using static analysis already exist; however, they consume significant time and memory resources. We propose an approach that combines static and dynamic analysis and yields the best possible precision with a minimal trade-off between resource consumption and accuracy.
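To illustrate the obstacle, here is a minimal Python sketch (hypothetical class and function names, not taken from the paper): the method invoked by a polymorphic call depends on the receiver's run-time type, so a purely static call graph must list every override as a candidate, while a lightweight dynamic trace can record which targets actually occur.

```python
# Hypothetical illustration: static analysis of render() must conservatively
# include every subclass's draw() as a possible callee; recording receivers
# at run time narrows the call graph to the targets really exercised.

class Shape:
    def draw(self):
        raise NotImplementedError

class Circle(Shape):
    def draw(self):
        return "circle"

class Square(Shape):
    def draw(self):
        return "square"

observed_targets = set()  # dynamic-analysis side: receiver types actually seen

def render(shape):
    observed_targets.add(type(shape).__name__)  # record the run-time receiver
    return shape.draw()                          # static candidates: Circle.draw, Square.draw, ...

if __name__ == "__main__":
    for s in (Circle(), Circle(), Square()):
        render(s)
    # The static call graph lists all overrides; the trace shows those exercised.
    print(sorted(observed_targets))  # ['Circle', 'Square']
```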
Abstract:
Code clone detection helps connect developers across projects, if we do it on a large scale. The cornerstones that allow clone detection to work on a large scale are: (1) bad hashing, (2) lightweight parsing using regular expressions, and (3) MapReduce pipelines. Bad hashing determines whether or not two artifacts are similar by checking whether their hashes are identical. We show a bad hashing scheme that works well on source code. Lightweight parsing using regular expressions is our technique for obtaining entire parse trees from regular expressions, robustly and efficiently. We detail the algorithm and implementation of one such regular expression engine. MapReduce pipelines are a way of expressing a computation such that it can be parallelized automatically and simply. We detail the design and implementation of one such MapReduce pipeline that is efficient and debuggable. We show a clone detector that combines these cornerstones to detect code clones across all projects and across all versions of each project.
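As an illustration of the bad hashing idea (a hypothetical sketch, not the authors' implementation), one can normalize each code fragment so that incidental differences such as whitespace and comments disappear, hash the result, and treat fragments with identical hashes as clone candidates:

```python
import hashlib
import re

def normalize(code: str) -> str:
    """Crude normalization: strip line comments and collapse whitespace."""
    code = re.sub(r"#.*", "", code)   # drop Python-style line comments
    code = re.sub(r"\s+", " ", code)  # collapse all whitespace runs
    return code.strip()

def clone_hash(code: str) -> str:
    """Hash of the normalized fragment; equal hashes mark clone candidates."""
    return hashlib.sha1(normalize(code).encode("utf-8")).hexdigest()

a = """
def add(x, y):
    # sum two numbers
    return x + y
"""
b = "def add(x, y):  return x + y"

# Identical hashes flag the two fragments as candidate clones.
print(clone_hash(a) == clone_hash(b))  # True
```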
Abstract:
In order to refine a field evaluation method to facilitate the selection of garlic (Allium sativum L.) clones resistant to Penicillium allii, a replicated trial was carried out in which the response of three experimental clones to infection was measured at two planting dates. The treatments included combinations of the presence or absence of: a) disinfection of the "seed", b) artificial wounds on the cloves, c) artificial inoculation. The cloves, with their enveloping leaf, were disinfected with a diluted sodium hypochlorite solution. Inoculation was performed by immersion in a suspension of 10(6) spores·ml(-1). Wounds were made with a blood lancet. The cloves were incubated in a humid chamber for 24 hours at 20 °C before planting. The response variables recorded were: the number of dead plants, and of living plants with and without symptoms, at 152 days after planting in the first season and 118 days in the second; the number of bulbs per size grade; and yield at harvest. The data were analyzed using simple correspondence analysis, difference of proportions, and parametric analyses. Greater differences between treatments were detected in the survival rate and in the proportion of commercial size grades in the late planting season. No differences in yield (p < 0.05) were observed in the early planting. Wounding was decisive for pathogen entry. The treatments were reduced to a control (disinfected, not inoculated, unwounded) and one treatment (disinfected, inoculated, and wounded). With respect to planting date, evaluation in late plantings proved advisable because the differences between treatments are expressed more clearly.
Abstract:
PURPOSE The purpose of this study was to classify and detect intraretinal hemorrhage (IRH) in spectral domain optical coherence tomography (SD-OCT). METHODS Initially, the presentation of IRH on SD-OCT in patients with branch retinal vein occlusion (BRVO) was described by one reader comparing color fundus (CF) images and SD-OCT using dedicated software. Based on these established characteristics, the presence and severity of IRH on SD-OCT and CF were assessed by two other masked readers, and the inter-device and inter-observer agreement were evaluated. Furthermore, the area of IRH was compared. RESULTS About 895 single B-scans of 24 eyes were analyzed. About 61% of SD-OCT scans and 46% of the CF images were graded as showing IRH (concordance: 73%, inter-device agreement: k = 0.5). However, when subdivided into the previously established severity levels of dense (CF: 21.3% versus SD-OCT: 34.7%, k = 0.2), flame-like (CF: 15.5% versus SD-OCT: 45.5%, k = 0.3), and dot-like (CF: 32% versus SD-OCT: 24.4%, k = 0.2) IRH, the inter-device agreement was weak. The inter-observer agreement was strong, with k = 0.9 for SD-OCT and k = 0.8 for CF. The mean area of IRH detected on SD-OCT was significantly greater than on CF (SD-OCT: 11.5 ± 4.3 mm(2) versus CF: 8.1 ± 5.5 mm(2), p = 0.008). CONCLUSIONS IRH appears to be detectable on SD-OCT; however, the previously established severity grading agreed only weakly with that assessed by CF.
Abstract:
All meta-analyses should include a heterogeneity analysis. Even so, it is not easy to decide whether a set of studies is homogeneous or heterogeneous because of the low statistical power of the statistics used (usually the Q test). Objective: Determine a set of rules enabling SE researchers to find out, based on the characteristics of the experiments to be aggregated, whether or not it is feasible to accurately detect heterogeneity. Method: Evaluate the statistical power of heterogeneity detection methods using a Monte Carlo simulation process. Results: The Q test is not powerful when the meta-analysis contains up to a total of about 200 experimental subjects and the effect size difference is less than 1. Conclusions: The Q test cannot be used as a decision-making criterion for meta-analysis in small-sample settings like SE. Random effects models should be used instead of fixed effects models. Caution should be exercised when applying Q test-mediated decomposition into subgroups.
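The following is a sketch of the kind of Monte Carlo procedure described (illustrative only; the study sizes, effect sizes, and number of simulations are assumptions, not the paper's settings): simulate a handful of small two-group experiments whose true standardized effects differ by less than 1, compute Cochran's Q, and estimate how often the test detects the heterogeneity at the 5% level.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

def q_test_power(true_effects, n_per_group, n_sim=5000, alpha=0.05):
    """Estimate the power of Cochran's Q to detect heterogeneity among
    studies whose true standardized mean differences are `true_effects`."""
    k = len(true_effects)
    crit = chi2.ppf(1 - alpha, df=k - 1)
    rejections = 0
    for _ in range(n_sim):
        d = np.empty(k)
        var = np.empty(k)
        for i, delta in enumerate(true_effects):
            g1 = rng.normal(delta, 1.0, n_per_group)
            g2 = rng.normal(0.0, 1.0, n_per_group)
            sp = np.sqrt((g1.var(ddof=1) + g2.var(ddof=1)) / 2)
            d[i] = (g1.mean() - g2.mean()) / sp
            # Approximate sampling variance of a standardized mean difference.
            var[i] = 2 / n_per_group + d[i] ** 2 / (4 * n_per_group)
        w = 1 / var
        d_bar = np.sum(w * d) / np.sum(w)
        q = np.sum(w * (d - d_bar) ** 2)  # Cochran's Q statistic
        rejections += q > crit
    return rejections / n_sim

# Five small studies (about 200 subjects total) with effect sizes differing by 0.6.
print(q_test_power(true_effects=[0.2, 0.2, 0.2, 0.8, 0.8], n_per_group=20))
```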
Abstract:
In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. In this paper, as a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions, such as the number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
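A minimal sketch of the PU/Central Unit split described above (hypothetical names and workload, assuming one process per camera; the actual architecture is considerably richer):

```python
import multiprocessing as mp

FRAMES_PER_CAMERA = 5  # hypothetical workload per PU

def processing_unit(cam_id, results):
    """One PU per camera: an acquisition phase followed by a processing phase."""
    for frame_no in range(FRAMES_PER_CAMERA):
        frame = f"cam{cam_id}-frame{frame_no}"  # stand-in for image acquisition
        detections = frame_no % 3               # stand-in for a 2D object detection module
        results.put((cam_id, frame_no, detections))

def central_unit(num_cameras=3):
    """Supervisor: spawns one PU per camera and collects their results."""
    results = mp.Queue()
    pus = [mp.Process(target=processing_unit, args=(cam, results))
           for cam in range(num_cameras)]
    for p in pus:
        p.start()
    # Drain the expected number of results, then wait for the PUs to finish.
    for _ in range(num_cameras * FRAMES_PER_CAMERA):
        print(results.get())
    for p in pus:
        p.join()

if __name__ == "__main__":
    central_unit()
```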
Abstract:
We have designed and implemented a framework that unifies unit testing and run-time verification (as well as static verification and static debugging). A key contribution of our approach is that a unified assertion language is used for all of these tasks. We first propose methods for compiling runtime checks for (parts of) assertions which cannot be verified at compile-time via program transformation. This transformation allows checking preconditions and postconditions, including conditional postconditions, properties at arbitrary program points, and certain computational properties. The implemented transformation includes several optimizations to reduce run-time overhead. We also propose a minimal addition to the assertion language which allows defining unit tests to be run in order to detect possible violations of the (partial) specifications expressed by the assertions. This language can express for example the input data for performing the unit tests or the number of times that the unit tests should be repeated. We have implemented the framework within the Ciao/CiaoPP system and effectively applied it to the verification of ISO-prolog compliance and to the detection of different types of bugs in the Ciao system source code. Several experimental results are presented that illustrate different trade-offs among program size, running time, or levels of verbosity of the messages shown to the user.
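The framework itself is implemented within Ciao/CiaoPP and uses its assertion language; as a rough language-neutral analogue (a Python sketch, purely illustrative and not the authors' mechanism), a precondition/postcondition that cannot be discharged at compile time can be turned into a run-time check wrapped around the code:

```python
from functools import wraps

def with_contract(pre, post):
    """Wrap a function with run-time checks for assertions that could not be
    verified statically (loosely analogous to the program transformation above)."""
    def decorator(fn):
        @wraps(fn)
        def checked(*args, **kwargs):
            assert pre(*args, **kwargs), f"precondition of {fn.__name__} violated"
            result = fn(*args, **kwargs)
            assert post(result, *args, **kwargs), f"postcondition of {fn.__name__} violated"
            return result
        return checked
    return decorator

@with_contract(pre=lambda xs: all(isinstance(x, int) for x in xs),
               post=lambda res, xs: res >= max(xs, default=0))
def my_sum(xs):
    return sum(x for x in xs if x > 0)

print(my_sum([1, 2, 3]))  # passes both checks
# my_sum(["a"])           # would raise: the precondition check fails at run time
```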
Abstract:
Sensing systems in living bodies offer a large variety of possible configurations and philosophies that can be emulated in artificial sensing systems. Motion detection is one of the areas where different animals adopt different solutions and, in most cases, these solutions take a very sophisticated form. One of them, the mammalian visual system, presents several advantages with respect to artificial ones. The main objective of this paper is to present a system, based on this biological structure, able to detect motion, its direction and its characteristics. The configuration adopted follows the internal structure of the mammalian retina, where just five types of cells arranged in five layers are able to differentiate a large number of characteristics of the image impinging on it. Its main advantage is that the detection of these properties is based purely on hardware. A simple unit, based on an optical logic cell previously employed in optical computing, is the basis for emulating the different behaviors of the biological neurons. No software is present and, in this way, no outside interference can affect the final behavior. This type of structure is able to work, once the internal configuration is implemented, without any further attention. Several possibilities are present in the architecture: detection of motion, of its direction and of its intensity. Moreover, some other characteristics, such as symmetry, may be obtained.
Abstract:
An accepted fact in software engineering is that software must undergo a verification and validation process during development to ascertain and improve its quality level. However, there are more techniques than a single developer could master, and yet it is impossible to be certain that software is free of defects. It is therefore crucial for developers to be able to choose, from the available evaluation techniques, the one most suitable and most likely to yield optimum quality results for different products. Although some knowledge is available on the strengths and weaknesses of the available software quality assurance techniques, not much is known yet about the relationship between different techniques and their contextual behavior. Objective: This research investigates the effectiveness of two testing techniques (equivalence class partitioning and decision coverage) and one review technique (code review by abstraction) in terms of their fault detection capability. This will be used to strengthen the practical knowledge available on these techniques.
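For illustration (a hypothetical example, not taken from the study materials), equivalence class partitioning derives one representative test per input class rather than testing every value:

```python
def shipping_fee(weight_kg):
    """Fee schedule used only to illustrate the input partitions below."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 2:
        return 5.0
    if weight_kg <= 20:
        return 12.0
    return 30.0

# One representative test case per equivalence class of the input domain.
def test_invalid_weight():
    try:
        shipping_fee(-1)
    except ValueError:
        return
    raise AssertionError("expected ValueError for the invalid-weight class")

def test_light_parcel():
    assert shipping_fee(1.0) == 5.0    # class (0, 2]

def test_medium_parcel():
    assert shipping_fee(10.0) == 12.0  # class (2, 20]

def test_heavy_parcel():
    assert shipping_fee(50.0) == 30.0  # class (20, inf)

if __name__ == "__main__":
    for t in (test_invalid_weight, test_light_parcel, test_medium_parcel, test_heavy_parcel):
        t()
    print("all equivalence-class tests passed")
```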
Abstract:
The increasing use of video editing software requires faster and more efficient editing tools. As a first step, these tools perform a temporal segmentation into shots that allows indexes describing the video content to be built later. Here, we propose a novel real-time, high-quality shot detection strategy, suitable for the latest generation of video editing software requiring both low computational cost and high-quality results. While abrupt transitions are detected through a very fast pixel-based analysis, gradual transitions are obtained from an efficient edge-based analysis. Both analyses are reinforced with a motion analysis that helps to detect and discard false detections. This motion analysis is carried out exclusively over a reduced set of candidate transitions, thus meeting the computational requirements demanded by new applications to fulfill user needs.
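As a rough sketch of the pixel-based stage of such a strategy (illustrative only; the threshold, frame sizes and synthetic data are assumptions, and the proposed method additionally uses edge- and motion-based analyses), abrupt transitions can be flagged where the mean absolute pixel difference between consecutive frames jumps:

```python
import numpy as np

def abrupt_cuts(frames, threshold=40.0):
    """Return indices i where frame i starts a new shot, based on the mean
    absolute grayscale difference between frames i-1 and i."""
    cuts = []
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(np.int16) - frames[i - 1].astype(np.int16))
        if diff.mean() > threshold:
            cuts.append(i)
    return cuts

# Synthetic example: 10 dark frames followed by 10 bright frames (one cut at index 10).
rng = np.random.default_rng(1)
dark = [rng.integers(0, 40, (48, 64), dtype=np.uint8) for _ in range(10)]
bright = [rng.integers(200, 255, (48, 64), dtype=np.uint8) for _ in range(10)]
print(abrupt_cuts(dark + bright))  # [10]
```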
Abstract:
This paper describes the experimental setup of a system composed of a set of wearable sensor devices for recording motion signals and software algorithms for signal analysis. This system is able to automatically detect and assess the severity of the bradykinesia, tremor, dyskinesia and akinesia motor symptoms. Based on the assessment of akinesia, the ON-OFF status of the patient is determined for each moment. The assessment obtained through the automatic evaluation of akinesia is compared with the status reported by the patients in their diaries. Preliminary results for a total recording period of 32 hours with two PD patients are presented, in which a good correspondence (88.2 ± 3.7%) was observed. The best (93.7%) and worst (87%) correlation results are illustrated, together with the analysis of the automatic assessment of the akinesia symptom leading to the status determination. The results obtained are promising and, if confirmed with further data, this automatic assessment of PD motor symptoms will lead to better adjustment of medication dosages and timing, cost savings and an improved quality of life for patients.
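The correspondence figure reported above is essentially an agreement rate between two binary time series; a trivial sketch of that comparison follows (made-up data, not the patients'):

```python
def on_off_agreement(automatic, diary):
    """Percentage of time intervals where the automatically detected ON/OFF
    status matches the status reported in the patient's diary."""
    assert len(automatic) == len(diary) and automatic
    matches = sum(a == d for a, d in zip(automatic, diary))
    return 100.0 * matches / len(automatic)

# One entry per interval: True = ON, False = OFF (hypothetical example).
auto  = [True, True, False, False, True, True, True, False]
diary = [True, True, False, True,  True, True, True, False]
print(f"{on_off_agreement(auto, diary):.1f}% agreement")  # 87.5% agreement
```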