15 results for Software clones Detection
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Code duplication is common in current programming practice: programmers search for snippets of code, incorporate them into their projects, and then modify them to their needs. In today's practice, no automated scheme is in place to inform either party of remote changes to the code. As code snippets continue to evolve both on the side of the user and on the side of the author, both may wish to benefit from remote bug fixes or refinements: authors may be interested in the actual usage of their code snippets, and researchers could gather information on clone usage. We propose maintaining a link between software clones across repositories and outline how such links can be created and maintained.
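To make the proposal concrete, the sketch below shows one way such a clone link could be represented. It is a minimal illustration under assumed names; `SnippetLocation`, `CloneLink`, and `has_diverged` are hypothetical, not taken from the paper.

```python
# Minimal sketch of a cross-repository clone link (hypothetical structure,
# not the paper's actual design): a record connecting a snippet's origin to
# a copy, plus a fingerprint so either side can notice remote changes.
import hashlib
from dataclasses import dataclass

@dataclass
class SnippetLocation:
    repository: str  # e.g. a clone URL
    path: str        # file path within the repository
    revision: str    # commit identifier at link-creation time

@dataclass
class CloneLink:
    origin: SnippetLocation  # where the snippet was copied from
    copy: SnippetLocation    # where it was incorporated
    fingerprint: str         # hash of the snippet text when the link was made

    def has_diverged(self, current_text: str) -> bool:
        """True if the linked snippet no longer matches the stored fingerprint."""
        return hashlib.sha1(current_text.encode()).hexdigest() != self.fingerprint
```

A tool maintaining such links could re-check `has_diverged` on both ends and notify the other party when the fingerprint no longer matches.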
Abstract:
Background Atrial fibrillation (AF) is common and may have severe consequences. Continuous long-term electrocardiogram (ECG) recording is widely used for AF screening. Recently, commercial ECG analysis software that automatically detects AF in long-term ECGs has been launched. It has been claimed that such tools offer reliable AF screening and save time during ECG analysis. However, this has not been investigated in a real-life patient cohort. Objective To investigate the performance of automatic software-based screening for AF in long-term ECGs. Methods Two independent physicians manually screened 22,601 hours of continuous long-term ECGs from 150 patients for AF. The presence, number, and duration of AF episodes were registered. Subsequently, the recordings were screened for AF by an established ECG analysis software (Pathfinder SL), and its performance was validated against the thorough manual analysis (gold standard). Results Sensitivity and specificity for AF detection were 98.5% (95% confidence interval 91.72%–99.96%) and 80.21% (95% confidence interval 70.83%–87.64%), respectively. Software-based AF detection was inferior to manual analysis by physicians (P < .0001). The median AF duration was underestimated (19.4 hours vs 22.1 hours; P < .001) and the median number of AF episodes was overestimated (32 episodes vs 2 episodes; P < .001) by the software. In comparison to extensive quantitative manual ECG analysis, software-based analysis saved time (2 minutes vs 19 minutes; P < .001). Conclusion Owing to its high sensitivity and ability to save time, software-based ECG analysis may be used as a screening tool for AF. An additional manual confirmatory analysis may be required to reduce the number of false-positive findings.
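For reference, the sensitivity and specificity reported above are the standard ratios against the manual gold standard; the sketch below restates them, with placeholder counts, since the abstract gives only the resulting percentages.

```python
# Sensitivity and specificity against the manual gold standard.
# The counts below are placeholders; the abstract reports only percentages.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of gold-standard AF cases flagged by the software: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of AF-free recordings cleared by the software: TN / (TN + FP)."""
    return tn / (tn + fp)

# Hypothetical counts, for illustration only:
print(f"sensitivity = {sensitivity(tp=66, fn=1):.1%}")   # 98.5%
print(f"specificity = {specificity(tn=77, fp=19):.1%}")  # 80.2%
```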
Abstract:
To retrospectively analyze the performance of a commercial computer-aided diagnosis (CAD) software in the detection of pulmonary nodules in original and energy-subtracted (ES) chest radiographs.
Abstract:
OBJECTIVES To find the pairing of first and second reader with the highest sensitivity for detecting lung nodules with CT at various dose levels. MATERIALS AND METHODS An anthropomorphic lung phantom and artificial lung nodules were used to simulate screening CT examinations at standard dose (100 mAs, 120 kVp) and 8 different low-dose levels, using 120, 100, and 80 kVp combined with 100, 50, and 25 mAs. At each dose level, 40 phantoms were randomly filled with 75 solid and 25 ground-glass nodules (5-12 mm). Two radiologists and 3 different computer-aided detection (CAD) software systems were paired to find the highest sensitivity. RESULTS Sensitivities at standard dose were 92%, 90%, 84%, 79%, and 73% for reader 1, reader 2, CAD1, CAD2, and CAD3, respectively. The combined sensitivity of human readers 1 and 2 improved to 97% (p1 = 0.063, p2 = 0.016). The highest sensitivities, between 97% and 99%, were achieved by combining any radiologist with any CAD at any dose level. Combining any two CADs yielded sensitivities between 85% and 88%, significantly lower than those of radiologists combined with CAD (p < 0.03). CONCLUSIONS Combining a human observer with any of the tested CAD systems provides optimal sensitivity for lung nodule detection, even at a reduced dose of 25 mAs/80 kVp.
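The pairing logic amounts to taking the union of two readers' detections; a minimal sketch follows, with hypothetical detection sets and numbers chosen only for illustration.

```python
# Paired reading: a nodule counts as detected if either member of the pair
# finds it, so the pair's sensitivity is the union of hits over all nodules.
# All identifiers and numbers below are illustrative.

def combined_sensitivity(hits_a: set, hits_b: set, all_nodules: set) -> float:
    """Sensitivity of a reader pair: union of detections over ground truth."""
    return len((hits_a | hits_b) & all_nodules) / len(all_nodules)

nodules = set(range(100))                    # 100 ground-truth nodules
reader = set(range(92))                      # 92% sensitivity alone
cad = set(range(10, 89)) | {95, 96, 97, 98}  # 83% sensitivity alone
print(f"{combined_sensitivity(reader, cad, nodules):.0%}")  # 96% combined
```

The pair outperforms either member alone whenever their misses do not fully overlap, which is why a human plus a CAD with complementary blind spots scores highest.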
Abstract:
We screened a total of 340 veterinarians (including general practitioners, small animal practitioners, large animal practitioners, and veterinarians working in various veterinary services or industry) and 29 veterinary assistants for nasal carriage of methicillin-resistant Staphylococcus aureus (MRSA) and Staphylococcus pseudintermedius (MRSP) at the 2012 Swiss veterinary annual meeting. MRSA isolates (n = 14) were detected in 3.8% (95% CI 2.1-6.3%) of the participants, whereas MRSP was not detected. Large animal practitioners were carriers of livestock-associated MRSA (LA-MRSA) ST398-t011-V (n = 2), ST398-t011-IV (n = 4), and ST398-t034-V (n = 1). In contrast, participants working with small animals harbored human healthcare-associated MRSA (HCA-MRSA) belonging to the epidemic lineages ST225-t003-II (n = 2), ST225-t014-II (n = 1), ST5-t002-II (n = 2), ST5-t283-IV (n = 1), and ST88-t186-IV (n = 1). HCA-MRSA harbored virulence factors such as enterotoxins, the β-hemolysin-converting phage, and leukocidins. None of the MRSA isolates carried Panton-Valentine leukocidin (PVL). In addition to the methicillin resistance gene mecA, LA-MRSA ST398 isolates generally contained additional antibiotic resistance genes conferring resistance to tetracycline [tet(M) and tet(K)], trimethoprim [dfrK, dfrG], and the aminoglycosides gentamicin and kanamycin [aac(6')-Ie-aph(2')-Ia]. HCA-MRSA ST5 and ST225, on the other hand, mainly contained genes conferring resistance to macrolide, lincosamide, and streptogramin B antibiotics [erm(A)], to spectinomycin [ant(9)-Ia], to amikacin and tobramycin [ant(4')-Ia], and to fluoroquinolones [amino acid substitutions in GrlA (S84L) and GyrA (S80F and S81P)]. MRSA carriage may represent an occupational risk, and veterinarians should be aware of possible MRSA colonization and the potential for developing infection or transmitting these strains. Professional exposure to animals should be reported upon hospitalization and before medical interventions to allow for preventive measures. Infection prevention measures are also indicated in veterinary medicine to avoid MRSA transmission between humans and animals, and to limit the spread of MRSA in the community and in animal and human hospitals.
Abstract:
The multi-target screening method described in this work allows the simultaneous detection and identification of 700 drugs and metabolites in biological fluids using a hybrid triple-quadrupole linear ion trap mass spectrometer in a single analytical run. After standardization of the method, the retention times of 700 compounds were determined and transitions for each compound were selected by a "scheduled" survey MRM scan, followed by an information-dependent acquisition using the sensitive enhanced product ion scan of a Q TRAP hybrid instrument. The identification of the compounds in the samples analyzed was accomplished by searching the tandem mass spectrometry (MS/MS) spectra against the library we developed, which contains electrospray ionization-MS/MS spectra of over 1,250 compounds. The multi-target screening method together with the library was included in a software program for routine screening and quantitation to achieve automated acquisition and library searching. With the help of this software application, the time for evaluation and interpretation of the results could be drastically reduced. This new multi-target screening method has been successfully applied for the analysis of postmortem and traffic offense samples as well as proficiency testing, and complements screening with immunoassays, gas chromatography-mass spectrometry, and liquid chromatography-diode-array detection. Other possible applications are analysis in clinical toxicology (for intoxication cases), in psychiatry (antidepressants and other psychoactive drugs), and in forensic toxicology (drugs and driving, workplace drug testing, oral fluid analysis, drug-facilitated sexual assault).
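In essence, library searching scores the acquired MS/MS spectrum against each reference spectrum. A common scoring function, assumed here purely for illustration (the abstract does not describe the vendor software's actual scoring), is the cosine similarity of the binned spectra.

```python
# Sketch of MS/MS library searching via cosine similarity over binned m/z
# values. The scoring function is a common choice, assumed for illustration;
# it is not necessarily what the vendor software uses.
import math
from collections import defaultdict

def bin_spectrum(peaks, bin_width=1.0):
    """peaks: iterable of (mz, intensity); returns {bin index: summed intensity}."""
    binned = defaultdict(float)
    for mz, intensity in peaks:
        binned[int(mz / bin_width)] += intensity
    return binned

def cosine_score(spectrum_a, spectrum_b):
    a, b = bin_spectrum(spectrum_a), bin_spectrum(spectrum_b)
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_match(query, library):
    """library: {compound name: reference peak list}; returns the top hit."""
    return max(library, key=lambda name: cosine_score(query, library[name]))
```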
Abstract:
The objective of our study was to compare the effect of dual-energy subtraction and bone suppression software alone and in combination with computer-aided detection (CAD) on the performance of human observers in lung nodule detection.
Abstract:
HYPOTHESIS Facial nerve monitoring can be used synchronously with a high-precision robotic tool as a functional warning to prevent a collision of the drill bit with the facial nerve during direct cochlear access (DCA). BACKGROUND Minimally invasive DCA aims to eliminate the need for a mastoidectomy by drilling a small tunnel through the facial recess to the cochlea with the aid of stereotactic tool guidance. Because the procedure is performed in a blind manner, structures such as the facial nerve are at risk. Neuromonitoring is a commonly used tool that helps surgeons identify the facial nerve (FN) during routine surgical procedures in the mastoid. Recently, neuromonitoring technology was integrated into a commercially available drill system, enabling real-time monitoring of the FN. The objective of this study was to determine whether this drilling system could be used to warn of an impending collision with the FN during robot-assisted DCA. MATERIALS AND METHODS The sheep was chosen as a suitable model for this study because of its similarity to human ear anatomy. The same surgical workflow applicable to human patients was performed in the animal model. Bone screws, serving as reference fiducials, were placed in the skull near the ear canal. The sheep head was imaged using a computed tomography scanner, and segmentation of the FN, mastoid, and other relevant structures, as well as planning of the drilling trajectories, was carried out using a dedicated software tool. During the actual procedure, a surgical drill system was connected to a nerve monitor and guided by a custom-built robot system. As the planned trajectories were drilled, stimulation and EMG response signals were recorded. A postoperative analysis was performed after each surgery to determine the actual drilled positions. RESULTS Using the calibrated pose synchronized with the EMG signals, the precise relationship between distance to the FN and EMG response at 3 different stimulation intensities could be determined for 11 different tunnels drilled in 3 different subjects. CONCLUSION From the results, it was determined that the current implementation of the neuromonitoring system lacks the sensitivity and repeatability necessary to be used as a warning device in robotic DCA. We hypothesize that this is primarily because of the stimulation pattern achieved using a noninsulated drill as a stimulating probe. Further work is necessary to determine whether specific design changes can improve the sensitivity and specificity.
Abstract:
OBJECTIVES The aim of this phantom study was to minimize the radiation dose by finding the combination of low tube current and low voltage that would still yield accurate volume measurements compared to standard CT imaging, without significantly decreasing the sensitivity of detecting lung nodules either with or without the assistance of CAD. METHODS An anthropomorphic chest phantom containing artificial solid and ground-glass nodules (GGNs, 5-12 mm) was examined with a 64-row multi-detector CT scanner at three tube currents of 100, 50, and 25 mAs in combination with three tube voltages of 120, 100, and 80 kVp. This resulted in eight different low-dose protocols, which were then compared to the standard CT protocol (100 mAs/120 kVp). For each protocol, at least 127 different nodules were scanned in 21-25 phantoms. The nodules were analyzed in two separate sessions by three independent, blinded radiologists and by computer-aided detection (CAD) software. RESULTS The mean sensitivity of the radiologists for identifying solid lung nodules on standard CT was 89.7% ± 4.9%. The sensitivity was not significantly impaired when the tube current and voltage were lowered at the same time, except at the lowest exposure level of 25 mAs/80 kVp [80.6% ± 4.3% (p = 0.031)]. Compared to standard CT, the sensitivity for detecting GGNs was significantly lower at all dose levels when the voltage was 80 kVp; this result was independent of the tube current. CAD significantly increased the radiologists' sensitivity for detecting solid nodules at all dose levels (by 5-11%). No significant volume measurement errors (VMEs) were documented for the radiologists or the CAD software at any dose level. CONCLUSIONS Our results suggest that a CT protocol with 25 mAs and 100 kVp is optimal for detecting solid and ground-glass nodules in lung cancer screening. The use of CAD software is highly recommended at all dose levels.
Abstract:
Polymorphism, along with inheritance, is one of the most important features of object-oriented languages, but it is also one of the biggest obstacles to source code comprehension. Depending on the run-time type of the receiver of a message, any one of a number of possible methods may be invoked. Several algorithms for creating accurate call graphs using static analysis already exist; however, they consume significant time and memory resources. We propose an approach that combines static and dynamic analysis to yield the best possible precision with a minimal trade-off between resource consumption and accuracy.
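A minimal sketch of how such a combination might look, under assumed data structures (the hierarchy map, trace sets, and function names are hypothetical): static analysis supplies a safe over-approximation of a call site's targets, and dynamically observed receiver types narrow it down.

```python
# Hypothetical sketch: combine a static over-approximation of a polymorphic
# call site's targets with receiver types observed in dynamic traces.

def static_targets(class_hierarchy, declared_type, selector):
    """Safe over-approximation: the declared type and every subtype may answer."""
    subtypes = class_hierarchy.get(declared_type, set()) | {declared_type}
    return {(cls, selector) for cls in subtypes}

def refine_with_traces(candidates, observed_receivers, selector):
    """Keep only targets whose receiver type occurred at run time; if the call
    site was never exercised, fall back to the safe static set."""
    observed = {(cls, selector) for cls in observed_receivers}
    return (candidates & observed) or candidates

hierarchy = {"Shape": {"Circle", "Square"}}
candidates = static_targets(hierarchy, "Shape", "draw")
print(refine_with_traces(candidates, {"Circle"}, "draw"))  # {('Circle', 'draw')}
```

The fallback preserves soundness for unexercised call sites, while exercised ones get the precision of the trace.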
Abstract:
Code clone detection helps connect developers across projects, if done on a large scale. The cornerstones that allow clone detection to work at large scale are: (1) bad hashing, (2) lightweight parsing using regular expressions, and (3) MapReduce pipelines. Bad hashing means determining whether two artifacts are similar by checking whether their hashes are identical. We show a bad hashing scheme that works well on source code. Lightweight parsing using regular expressions is our technique for robustly and efficiently obtaining entire parse trees from regular expressions. We detail the algorithm and implementation of one such regular expression engine. MapReduce pipelines are a way of expressing a computation such that it can be parallelized automatically and simply. We detail the design and implementation of one such MapReduce pipeline that is efficient and debuggable. Finally, we show a clone detector that combines these cornerstones to detect code clones across all projects and across all versions of each project.
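As an illustration of the first cornerstone, here is a minimal bad-hashing sketch; the normalization rules are assumptions for the example, not the paper's actual scheme.

```python
# "Bad hashing": two artifacts are deemed similar when their hashes collide,
# so the hash is computed over a normalization that deliberately discards
# detail. The normalization rules below are illustrative, not the paper's.
import hashlib
import re
from collections import defaultdict

def normalize(code: str) -> str:
    code = re.sub(r"/\*.*?\*/", "", code, flags=re.S)  # drop block comments
    code = re.sub(r"//[^\n]*", "", code)               # drop line comments
    return re.sub(r"\s+", " ", code).strip()           # collapse whitespace

def bad_hash(code: str) -> str:
    return hashlib.md5(normalize(code).encode()).hexdigest()

def group_clones(artifacts: dict) -> dict:
    """artifacts: {name: source text}; returns hash -> names deemed clones."""
    groups = defaultdict(list)
    for name, code in artifacts.items():
        groups[bad_hash(code)].append(name)
    return {h: names for h, names in groups.items() if len(names) > 1}
```

Because grouping by hash is a pure key-value aggregation, it maps directly onto a MapReduce pipeline: emit (hash, artifact) pairs, then reduce per key.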
Abstract:
PURPOSE The purpose of this study was to classify and detect intraretinal hemorrhage (IRH) in spectral domain optical coherence tomography (SD-OCT). METHODS Initially, the presentation of IRH on SD-OCT in patients with branch retinal vein occlusion (BRVO) was described by one reader comparing color fundus (CF) photography and SD-OCT using dedicated software. Based on these established characteristics, the presence and severity of IRH on SD-OCT and CF were assessed by two other masked readers, and the inter-device and inter-observer agreement were evaluated. Furthermore, the areas of IRH were compared between the two modalities. RESULTS In total, 895 single B-scans of 24 eyes were analyzed. 61% of SD-OCT scans and 46% of the CF images were graded as showing IRH (concordance: 73%, inter-device agreement: k = 0.5). However, when subdivided into the previously established severity levels of dense (CF: 21.3% versus SD-OCT: 34.7%, k = 0.2), flame-like (CF: 15.5% versus SD-OCT: 45.5%, k = 0.3), and dot-like (CF: 32% versus SD-OCT: 24.4%, k = 0.2) IRH, the inter-device agreement was weak. The inter-observer agreement was strong, with k = 0.9 for SD-OCT and k = 0.8 for CF. The mean area of IRH detected on SD-OCT was significantly greater than on CF (SD-OCT: 11.5 ± 4.3 mm² versus CF: 8.1 ± 5.5 mm², p = 0.008). CONCLUSIONS IRH seems to be detectable on SD-OCT; however, the previously established severity grading agreed only weakly with that assessed by CF.
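The agreement values (k) above are Cohen's kappa: observed agreement corrected for the agreement expected by chance. A minimal two-rater sketch follows, with hypothetical gradings.

```python
# Cohen's kappa for two raters: (observed - expected) / (1 - expected),
# where "expected" is chance agreement from the raters' marginal frequencies.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical severity gradings of six B-scans:
reader_1 = ["dense", "flame", "dot", "dense", "none", "flame"]
reader_2 = ["dense", "flame", "dot", "flame", "none", "flame"]
print(round(cohens_kappa(reader_1, reader_2), 2))  # 0.77
```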
Abstract:
When analysing software metrics, users find that visualisation tools lack support for (1) detecting patterns within metrics and (2) analysing software corpora. In this paper we present Explora, a visualisation tool designed for the simultaneous analysis of multiple metrics of systems in software corpora. Explora incorporates a novel lightweight visualisation technique called PolyGrid that promotes the detection of graphical patterns. We present an example in which we analyse the relation of subtype polymorphism to inheritance and invocation in corpora of Smalltalk and Java systems, and find that (1) subtype polymorphism is more likely to be found in large hierarchies; (2) as class hierarchies grow horizontally, they also grow vertically; and (3) in polymorphic hierarchies, the length of class names is orthogonal to the cardinality of the call sites.