982 results for Automated Software Debugging


Relevance:

80.00%

Publisher:

Abstract:

Background: Summarised retinal vessel diameters are linked to systemic vascular pathology. Monochromatic images provide the best contrast for measuring vessel calibres. However, when images are obtained with a dual-wavelength oximeter, a red-free image can be extracted from the green-channel information alone, which in turn reduces the number of photographs taken at a given time. This reduces patient exposure to the camera flash and could provide images of sufficient quality to reliably measure vessel calibres. Methods: We obtained retinal images of one eye of 45 healthy participants. Central retinal arteriolar and central retinal venular equivalents (CRAE and CRVE, respectively) were measured using semi-automated software from two monochromatic images: one taken with a red-free filter and one extracted from the green channel of a dual-wavelength oximetry image. Results: Participants were aged between 21 and 62 years; all were normotensive (SBP: 115 (12) mmHg; DBP: 72 (10) mmHg) and had normal intra-ocular pressures (12 (3) mmHg). Bland-Altman analysis revealed good agreement between CRAE and CRVE as obtained from both images (mean bias CRAE = 0.88; CRVE = 2.82). Conclusions: Summarised retinal vessel calibre measurements obtained from oximetry images are in good agreement with those obtained using red-free photographs.
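The agreement statistics reported here follow the standard Bland-Altman procedure; a minimal NumPy sketch of that calculation is shown below. The measurement values are invented placeholders, not the study data.

```python
import numpy as np

# Hypothetical paired CRAE measurements from the red-free image and the
# oximetry green-channel image, one pair per eye (values are illustrative).
crae_red_free = np.array([152.1, 148.7, 160.3, 155.0, 149.8])
crae_green    = np.array([151.0, 149.5, 159.1, 154.2, 151.2])

diff = crae_green - crae_red_free                      # per-eye differences
mean_bias = diff.mean()                                # Bland-Altman mean bias
sd = diff.std(ddof=1)
loa = (mean_bias - 1.96 * sd, mean_bias + 1.96 * sd)   # 95% limits of agreement

print(f"mean bias = {mean_bias:.2f}; limits of agreement = {loa[0]:.2f} to {loa[1]:.2f}")
```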

Relevance:

80.00%

Publisher:

Abstract:

Software deployment can be seen as the process comprising all the activities required to make software available to users without a manual installation on the user's computer or other machine. Several software deployment tools that handle automated installations are available to companies on the market today. The HVDC department at ABB in Ludvika needs to start using a tool for automated software installation, since installations are currently performed manually and are time-consuming. As a Microsoft partner, ABB wants to see how Microsoft's software deployment tool could help meet this need. Our study aimed to investigate how software installation work is carried out today and to find improvement opportunities for installations that cannot currently be automated. The study also included developing a general framework for how organisations can proceed when they want to start using a software deployment tool. The framework includes a requirements specification that is to be evaluated against Microsoft's tool. To form a picture of how the work in the organisation is carried out today, we conducted a questionnaire survey and interviews with HVDC staff. To develop the framework, we used the data collected from the interviews, the questionnaire survey and a group interview, in order to identify the staff's requirements and wishes for a software deployment tool. Literature studies were carried out to establish a theoretical frame of reference for developing the framework and the requirements specification. The study has resulted in a description of software deployment, improvement opportunities in the software installation work, and a general framework describing how organisations can proceed when they start using a software deployment tool. The framework also contains a requirements specification that was used to evaluate Microsoft's software deployment tool. In our study we have not found that anyone has previously produced a general framework and requirements specification that organisations can use as a basis when starting to use a software deployment tool. The results of our study can fill this knowledge gap.

Relevance:

40.00%

Publisher:

Abstract:

The development of self-adaptive software (SaS) has specific characteristics compared to traditional software, since it allows changes to be incorporated at runtime. Automated processes have been used as a feasible solution to conduct software adaptation at runtime. In parallel, reference models have been used to aggregate knowledge and architectural artifacts, since they capture the essence of systems in specific domains. However, there is currently no reference model based on reflection for the development of SaS. Thus, the main contribution of this paper is to present a reference model based on reflection for the development of SaS that needs to adapt at runtime. To show the applicability of this model, a case study was conducted, and it indicated good prospects for contributing efficiently to the SaS area.
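As a rough illustration of the reflection mechanism such a reference model builds on (and not the model proposed in the paper), the Python sketch below shows a component that inspects and reconfigures its own behaviour at runtime via getattr/setattr; the class, strategies and adaptation rule are all invented.

```python
import time

class AdaptiveSorter:
    """Toy self-adaptive component: it observes its own behaviour and uses
    reflection (getattr/setattr) to swap its sorting strategy at runtime."""

    def __init__(self):
        self.strategy_name = "sort_builtin"

    def sort_builtin(self, data):
        return sorted(data)

    def sort_insertion(self, data):
        out = []
        for x in data:
            i = len(out)
            while i > 0 and out[i - 1] > x:
                i -= 1
            out.insert(i, x)
        return out

    def run(self, data):
        strategy = getattr(self, self.strategy_name)   # reflective lookup
        start = time.perf_counter()
        result = strategy(data)
        elapsed = time.perf_counter() - start
        self.adapt(len(data), elapsed)
        return result

    def adapt(self, size, elapsed):
        # Meta-level decision: switch strategy based on observed behaviour
        # (here only the input size; a real system would use richer sensors).
        new_name = "sort_insertion" if size < 32 else "sort_builtin"
        if new_name != self.strategy_name:
            setattr(self, "strategy_name", new_name)   # reflective reconfiguration

sorter = AdaptiveSorter()
print(sorter.run([5, 2, 9, 1]))   # small input triggers adaptation for the next call
print(sorter.strategy_name)
```

The meta-level adapt step is the part a reflection-based reference model would standardise: observing the base level and rewiring it without stopping the system.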

Relevance:

40.00%

Publisher:

Abstract:

OBJECTIVES: To determine the accuracy of automated vessel-segmentation software for vessel-diameter measurements based on three-dimensional contrast-enhanced magnetic resonance angiography (3D-MRA). METHOD: In 10 patients with high-grade carotid stenosis, automated measurements of both carotid arteries were obtained with 3D-MRA by two independent investigators and compared with manual measurements obtained by digital subtraction angiography (DSA) and 2D maximum-intensity projection (2D-MIP) based on MRA and duplex ultrasonography (US). In 42 patients undergoing carotid endarterectomy (CEA), intraoperative measurements (IOP) were compared with postoperative 3D-MRA and US. RESULTS: Mean interoperator variability was 8% for measurements by DSA and 11% by 2D-MIP, but there was no interoperator variability with the automated 3D-MRA analysis. Good correlations were found between DSA (standard of reference), manual 2D-MIP (rP=0.6) and automated 3D-MRA (rP=0.8). Excellent correlations were found between IOP, 3D-MRA (rP=0.93) and US (rP=0.83). CONCLUSION: Automated 3D-MRA-based vessel segmentation and quantification result in accurate measurements of extracerebral-vessel dimensions.
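A minimal sketch of how agreement between a reference method and an automated measurement can be quantified with a Pearson correlation, together with a simple inter-operator variability figure, assuming NumPy/SciPy; the diameter values are invented and do not reproduce the study's results.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical carotid diameter measurements (mm) of the same vessels,
# illustrating how agreement between methods can be quantified.
dsa      = np.array([4.1, 3.2, 5.0, 2.8, 4.6])   # reference standard
mra_auto = np.array([4.0, 3.4, 4.9, 3.0, 4.5])   # automated 3D-MRA

r, _ = pearsonr(dsa, mra_auto)
print(f"Pearson r between DSA and automated 3D-MRA: {r:.2f}")

# Inter-operator variability for a manual method: mean relative difference
# between two readers measuring the same vessels (zero for a fully automated
# method, since it always returns the same value for the same image).
reader1 = np.array([4.2, 3.1, 5.1, 2.9, 4.7])
reader2 = np.array([3.9, 3.3, 4.8, 2.7, 4.4])
variability = np.mean(np.abs(reader1 - reader2) / ((reader1 + reader2) / 2)) * 100
print(f"inter-operator variability: {variability:.1f}%")
```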

Relevance:

40.00%

Publisher:

Abstract:

Many software engineers have found it difficult to understand, incorporate and use different formal models consistently in the process of software development, especially for large and complex software systems. This is mainly due to the complex mathematical nature of formal methods and the lack of tool support. It is highly desirable to have software models and their related software artefacts systematically connected and used collaboratively, rather than in isolation. The success of the Semantic Web, as the next generation of Web technology, can have a profound impact on the environment for formal software development. It allows both software engineers and machines to understand the content of formal models and supports more effective software design in terms of understanding, sharing and reusing in a distributed manner. To realise the full potential of the Semantic Web in formal software development, effectively creating proper semantic metadata for formal software models and their related software artefacts is crucial. This paper proposes a framework that allows users to interconnect knowledge about formal software models and other related documents using semantic technology. We first propose a methodology with tool support to automatically derive ontological metadata from formal software models and describe them semantically. We then develop a Semantic Web environment for representing and sharing formal Z/OZ models. A method with a prototype tool is presented to enhance semantic querying of software models and other artefacts. © 2014.
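As an illustration of what ontological metadata for a formal model might look like, the sketch below uses the Python rdflib library to describe a Z schema as RDF triples and query it with SPARQL. The fm: vocabulary and the BirthdayBook example are invented for illustration and are not the ontology or tooling proposed in the paper.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Hypothetical mini-ontology for formal-model metadata.
FM = Namespace("http://example.org/formal-models#")
g = Graph()
g.bind("fm", FM)

birthday_book = FM.BirthdayBook
g.add((birthday_book, RDF.type, FM.ZSchema))
g.add((birthday_book, RDFS.label, Literal("BirthdayBook")))
g.add((birthday_book, FM.declaresVariable, Literal("known : P NAME")))
g.add((birthday_book, FM.declaresVariable, Literal("birthday : NAME -|-> DATE")))

# Semantic query: find every Z schema and its declared variables.
q = """
PREFIX fm: <http://example.org/formal-models#>
SELECT ?schema ?var WHERE {
    ?schema a fm:ZSchema ;
            fm:declaresVariable ?var .
}
"""
for schema, var in g.query(q):
    print(schema, var)
```

Once model elements are published as triples like these, other artefacts (requirements, test cases, documentation) can link to them by URI, which is what makes distributed sharing and querying possible.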

Relevance:

40.00%

Publisher:

Abstract:

The maintenance and evolution of software systems have become very critical tasks over recent years due to the diversity and high demand of features, devices and users. Understanding and analysing how new changes impact the quality attributes of the architecture of such systems is an essential prerequisite for avoiding quality deterioration during their evolution. This thesis proposes an automated approach for analysing variation of the performance quality attribute in terms of execution time (response time). It is implemented by a framework that adopts dynamic analysis and software repository mining techniques to provide an automated way of revealing potential sources (commits and issues) of performance variation in scenarios during the evolution of software systems. The approach defines four phases: (i) preparation: choosing the scenarios and preparing the target releases; (ii) dynamic analysis: determining the performance of scenarios and methods by computing their execution times; (iii) variation analysis: processing and comparing the dynamic-analysis results for different releases; and (iv) repository mining: identifying issues and commits associated with the detected performance variation. Empirical studies were performed to evaluate the approach from different perspectives. An exploratory study analysed the feasibility of applying the approach to systems from different domains in order to automatically identify source-code elements with performance variation and the changes that affected those elements during an evolution. This study analysed three systems: (i) SIGAA, a web system for academic management; (ii) ArgoUML, a UML modelling tool; and (iii) Netty, a framework for network applications. Another study performed an evolutionary analysis by applying the approach to multiple releases of Netty and of the web frameworks Wicket and Jetty. In this study, 21 releases were analysed (seven of each system), totalling 57 scenarios. In summary, 14 scenarios with significant performance variation were found for Netty, 13 for Wicket and 9 for Jetty. Additionally, feedback was obtained from eight developers of these systems through an online form. Finally, in the last study, a performance regression model was developed to indicate which properties of commits are more likely to cause performance degradation. In total, 997 commits were mined, of which 103 were retrieved from degraded source-code elements and 19 from optimised ones, while 875 had no impact on execution time. The number of days before the release and the day of the week proved to be the most relevant variables of performance-degrading commits in our model. The area under the ROC (Receiver Operating Characteristic) curve of the regression model is 60%, which means that using the model to decide whether a commit will cause degradation is 10% better than a random decision.
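A much-simplified sketch of the dynamic-analysis and variation-analysis phases, assuming plain Python timing and a fixed 20% threshold (both choices are illustrative, not the thesis framework); the two scenario callables stand in for the same scenario executed on two different releases.

```python
import statistics
import time

def run_scenario(scenario, repetitions=10):
    """Dynamic-analysis phase: run one scenario several times and collect
    execution times in seconds. `scenario` is any zero-argument callable."""
    samples = []
    for _ in range(repetitions):
        start = time.perf_counter()
        scenario()
        samples.append(time.perf_counter() - start)
    return samples

def significant_variation(old_samples, new_samples, threshold=0.20):
    """Variation-analysis phase (simplified): flag the scenario when the mean
    execution time grew by more than `threshold` (20%) between releases."""
    old_mean = statistics.mean(old_samples)
    new_mean = statistics.mean(new_samples)
    return (new_mean - old_mean) / old_mean > threshold

# Hypothetical scenario implementations standing in for two releases.
scenario_release_1 = lambda: sum(i * i for i in range(50_000))
scenario_release_2 = lambda: sum(i * i for i in range(80_000))   # slower on purpose

if significant_variation(run_scenario(scenario_release_1), run_scenario(scenario_release_2)):
    print("performance degradation detected: mine commits/issues between the two releases")
```

In the full approach, a flagged scenario would trigger the repository-mining phase, which retrieves the commits and issues landed between the two releases for the affected source-code elements.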

Relevance:

40.00%

Publisher:

Abstract:

Automated acceptance testing is testing done at a higher level to check whether the system satisfies the requirements desired by the business clients, using scripts separate from the software itself. This project is a study of the feasibility of acceptance tests written following the Behavior Driven Development principle. The project includes an implementation part in which automated acceptance tests were written for the Touch-point web application developed by Dewire (a software consulting company) for Telia (a telecom company), based on the requirements received from the customer (Telia). The automated acceptance tests use the Cucumber-Selenium framework, which enforces Behavior Driven Development principles. The purpose of the implementation was to verify the practicability of this style of acceptance testing. From the completed implementation it was concluded that all real-world customer requirements could be converted into executable specifications, and that the process was neither time-consuming nor difficult for a low-experienced programmer such as the author. The project also includes a survey to measure the learnability and understandability of Gherkin, the language that Cucumber understands. The survey consists of some Gherkin examples followed by questions that include making changes to those examples. The survey had three parts: the first easy, the second medium and the third most difficult. It also had a linear scale from 1 to 5 for rating the difficulty level of each part, where 1 stood for very easy and 5 for very difficult. The time when participants began the survey was also recorded in order to calculate the total time taken to learn the material and answer the questions. The survey was taken by 18 Dewire employees whose primary working role was programmer, tester or project manager. In the results, testers and project managers were grouped as non-programmers. The survey concluded that Gherkin is very easy and quick to learn, and the participants rated it as very easy.
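For illustration, the sketch below shows the Cucumber/Gherkin style of executable specification using the Python behave and selenium libraries rather than the actual Touch-point suite; the feature text, URL and element locators are invented.

```python
# steps/login_steps.py -- illustrative behave step definitions that drive a
# browser with Selenium. They match a Gherkin feature file such as:
#
#   Feature: Login
#     Scenario: Successful login
#       Given the user is on the login page
#       When the user logs in as "alice" with password "secret"
#       Then the dashboard is shown
#
from behave import given, when, then
from selenium import webdriver
from selenium.webdriver.common.by import By

@given("the user is on the login page")
def step_open_login(context):
    context.driver = webdriver.Firefox()
    context.driver.get("https://example.org/login")       # placeholder URL

@when('the user logs in as "{user}" with password "{password}"')
def step_login(context, user, password):
    context.driver.find_element(By.NAME, "username").send_keys(user)
    context.driver.find_element(By.NAME, "password").send_keys(password)
    context.driver.find_element(By.ID, "submit").click()  # placeholder locators

@then("the dashboard is shown")
def step_check_dashboard(context):
    assert "Dashboard" in context.driver.title
    context.driver.quit()
```

The business-readable Gherkin scenario is the executable specification; the step definitions are the only part a programmer has to write.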

Relevance:

30.00%

Publisher:

Abstract:

Purpose: To evaluate the ability of the GDx Variable Corneal Compensation (VCC) Guided Progression Analysis (GPA) software for detecting glaucomatous progression. Design: Observational cohort study. Participants: The study included 453 eyes from 252 individuals followed for an average of 46 +/- 14 months as part of the Diagnostic Innovations in Glaucoma Study. At baseline, 29% of the eyes were classified as glaucomatous, 67% of the eyes were classified as suspects, and 5% of the eyes were classified as healthy. Methods: Images were obtained annually with the GDx VCC and analyzed for progression using the Fast Mode of the GDx GPA software. Progression using conventional methods was determined by the GPA software for standard automated achromatic perimetry (SAP) and by masked assessment of optic disc stereophotographs by expert graders. Main Outcome Measures: Sensitivity, specificity, and likelihood ratios (LRs) for detection of glaucoma progression using the GDx GPA were calculated with SAP and optic disc stereophotographs used as reference standards. Agreement among the different methods was reported using the AC1 coefficient. Results: Thirty-four of the 431 glaucoma and glaucoma suspect eyes (8%) showed progression by SAP or optic disc stereophotographs. The GDx GPA detected 17 of these eyes for a sensitivity of 50%. Fourteen eyes showed progression only by the GDx GPA with a specificity of 96%. Positive and negative LRs were 12.5 and 0.5, respectively. None of the healthy eyes showed progression by the GDx GPA, with a specificity of 100% in this group. Inter-method agreement (AC1 coefficient and 95% confidence intervals) for non-progressing and progressing eyes was 0.96 (0.94-0.97) and 0.44 (0.28-0.61), respectively. Conclusions: The GDx GPA detected glaucoma progression in a significant number of cases showing progression by conventional methods, with high specificity and high positive LRs. Estimates of the accuracy for detecting progression suggest that the GDx GPA could be used to complement clinical evaluation in the detection of longitudinal change in glaucoma. Financial Disclosure(s): Proprietary or commercial disclosure may be found after the references. Ophthalmology 2010; 117: 462-470 (C) 2010 by the American Academy of Ophthalmology.
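The sensitivity, specificity and likelihood ratios quoted above come from a standard 2x2 comparison against the reference standard; a small sketch of that arithmetic is shown below. The counts are chosen to mirror the abstract (17 of 34 progressing eyes detected, 14 false positives), but the published likelihood ratios were presumably computed with slightly different denominators, so the numbers need not match exactly.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and likelihood ratios from a 2x2 table of
    progression calls versus the reference standard."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
    return sensitivity, specificity, lr_pos, lr_neg

# Counts in the spirit of the abstract; exact published figures may differ
# because of rounding and the precise denominators used.
sens, spec, lr_pos, lr_neg = diagnostic_metrics(tp=17, fp=14, fn=17, tn=383)
print(f"sensitivity={sens:.0%} specificity={spec:.0%} LR+={lr_pos:.1f} LR-={lr_neg:.1f}")
```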

Relevance:

30.00%

Publisher:

Abstract:

Concerns have been raised about the reproducibility of brachial artery reactivity (BAR), because subjective decisions regarding the location of interfaces may influence the measurement of very small changes in lumen diameter. We studied 120 consecutive patients with BAR to assess whether an automated technique could be applied and whether experience influenced reproducibility between two observers, one experienced and one inexperienced. Digital cineloops were measured automatically, using software that detects the leading edge of the endothelium and tracks it in sequential frames, and also manually, where a set of three point-to-point measurements was averaged. There was a high correlation between the automated and manual techniques for both observers, although less variability was present with expert readers. The overall limits of agreement for interobserver concordance were 0.13 +/- 0.65 mm for the manual and 0.03 +/- 0.74 mm for the automated measurement. For intraobserver concordance, the limits of agreement were -0.07 +/- 0.38 mm for observer 1 and -0.16 +/- 0.55 mm for observer 2. We concluded that BAR measurements were highly concordant between observers, although more concordant using the automated method, and that experience does affect concordance. Care must be taken to ensure that the same segments are measured between observers and serially.
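A very rough sketch of the idea behind automated leading-edge measurement, assuming a 1-D cross-vessel intensity profile per frame; the threshold rule, pixel size and profile values are invented and greatly simplify what the commercial edge-tracking software does.

```python
import numpy as np

def leading_edge(profile, threshold_frac=0.5):
    """Index of the first sample crossing a fraction of the peak intensity --
    a crude stand-in for leading-edge detection of the vessel wall in a
    cross-vessel intensity profile."""
    threshold = threshold_frac * profile.max()
    above = np.nonzero(profile >= threshold)[0]
    return int(above[0]) if above.size else None

def diameter(profile, pixel_size_mm=0.05):
    """Distance between the leading edges found from each side of the profile."""
    near = leading_edge(profile)
    far = len(profile) - 1 - leading_edge(profile[::-1])
    return (far - near) * pixel_size_mm

# Synthetic cross-sectional profiles for two consecutive frames (bright walls,
# darker lumen); repeating the same rule on every frame is what removes the
# subjective interface placement.
frame1 = np.array([0, 1, 8, 9, 3, 2, 2, 3, 9, 8, 1, 0], dtype=float)
frame2 = np.array([0, 1, 7, 9, 3, 2, 2, 2, 3, 9, 7, 1], dtype=float)
print([round(diameter(f), 2) for f in (frame1, frame2)])  # mm per frame
```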

Relevance:

30.00%

Publisher:

Abstract:

The refinement calculus is a well-established theory for deriving program code from specifications. Recent research has extended the theory to handle timing requirements, as well as functional ones, and we have developed an interactive programming tool based on these extensions. Through a number of case studies completed using the tool, this paper explains how the tool helps the programmer by supporting the many forms of variables needed in the theory. These include simple state variables as in the untimed calculus, trace variables that model the evolution of properties over time, auxiliary variables that exist only to support formal reasoning, subroutine parameters, and variables shared between parallel processes.
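For readers unfamiliar with the calculus, a standard (untimed) assignment-introduction law illustrates the kind of refinement step such a tool mechanises; the notation below uses conventional specification statements and is not taken from the paper itself:

\[
  w : [\, pre,\ post \,] \;\sqsubseteq\; w := E
  \qquad \text{provided} \quad pre \;\Rightarrow\; post[w := E].
\]

For instance, \( x : [\, \mathit{true},\ x = 1 \,] \sqsubseteq x := 1 \), since \( \mathit{true} \Rightarrow (x = 1)[x := 1] \).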

Relevance:

30.00%

Publisher:

Abstract:

In Part 1 of this paper, a methodology for back-to-back testing of simulation software was described. Residuals with error-dependent geometric properties were generated. A set of potential coding errors was enumerated, along with a corresponding set of feature matrices, which describe the geometric properties imposed on the residuals by each of the errors. In this part of the paper, an algorithm is developed to isolate the coding errors present by analysing the residuals. A set of errors is isolated when the subspace spanned by their combined feature matrices corresponds to that of the residuals. Individual feature matrices are compared to the residuals and classified as 'definite', 'possible' or 'impossible'. The status of 'possible' errors is resolved using a dynamic subset testing algorithm. To demonstrate and validate the testing methodology presented in Part 1 and the isolation algorithm presented in Part 2, a case study is presented using a model for biological wastewater treatment. Both single and simultaneous errors that are deliberately introduced into the simulation code are correctly detected and isolated. Copyright (C) 2003 John Wiley & Sons, Ltd.
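The subspace comparison at the heart of the isolation step can be pictured with a short NumPy sketch; this is a simplified containment test on column spaces, not the published algorithm (which additionally distinguishes 'definite' from 'possible' errors and resolves the latter by dynamic subset testing). The feature matrices and residuals are invented.

```python
import numpy as np

def column_space_basis(m, tol=1e-8):
    """Orthonormal basis of the column space of m (via SVD)."""
    u, s, _ = np.linalg.svd(m, full_matrices=False)
    return u[:, s > tol]

def contains(outer, inner, tol=1e-8):
    """True when span(inner) is a subspace of span(outer)."""
    b = column_space_basis(outer, tol)
    projection = b @ (b.T @ inner)
    return np.allclose(projection, inner, atol=tol)

def explains(feature_matrices, residuals, tol=1e-8):
    """A candidate set of errors explains the residuals when the subspace
    spanned by their combined feature matrices matches that of the residuals
    (containment in both directions)."""
    combined = np.hstack(feature_matrices)
    return contains(combined, residuals, tol) and contains(residuals, combined, tol)

# Toy example: residuals produced by "error A" only; the matrices are invented.
f_a = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
f_b = np.array([[0.0], [0.0], [1.0]])
residuals = f_a @ np.random.default_rng(0).normal(size=(2, 5))  # lies in span(f_a)
print(explains([f_a], residuals))   # True  -> error A isolated
print(explains([f_b], residuals))   # False -> error B alone cannot explain them
```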

Relevance:

30.00%

Publisher:

Abstract:

INTRODUCTION: The correct identification of the underlying cause of death and its precise assignment to a code from the International Classification of Diseases are important issues for achieving accurate and universally comparable mortality statistics. These factors, among others, led to the development of computer software programs to automatically identify the underlying cause of death. OBJECTIVE: This work was conceived to compare the underlying causes of death processed respectively by the Automated Classification of Medical Entities (ACME) and the "Sistema de Seleção de Causa Básica de Morte" (SCB) programs. MATERIAL AND METHOD: The comparative evaluation of the underlying causes of death processed by the ACME and SCB systems was performed using the input data file for the ACME system, which included deaths that occurred in the State of S. Paulo from June to December 1993, totalling 129,104 records of the corresponding death certificates. The differences between the underlying causes selected by the ACME and SCB systems in the month of June, when considered SCB errors, were used to correct and improve the SCB processing logic and its decision tables. RESULTS: The processing of the underlying causes of death by the ACME and SCB systems resulted in 3,278 differences, which were analysed and ascribed to unanswered dialogue boxes during processing, to deaths due to human immunodeficiency virus [HIV] disease, for which there was no specific provision in either system, to coding and/or keying errors, and to actual problems. Detailed analysis of the latter disclosed that the majority of the underlying causes of death processed by the SCB system were correct, that the two systems gave different interpretations to the mortality coding rules, that some particular problems could not be explained with the available documentation, and that a smaller proportion of problems were identified as SCB errors. CONCLUSION: These results, disclosing a very low and insignificant number of actual problems, warrant the use of this version of the SCB system for the Ninth Revision of the International Classification of Diseases and assure the continuity of the work being undertaken for the Tenth Revision version.
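A trivial sketch of the kind of record-by-record comparison such an evaluation relies on, assuming each system's output is a mapping from death-certificate ID to the selected underlying-cause code; the IDs and ICD-style codes are invented.

```python
from collections import Counter

# Hypothetical underlying-cause-of-death selections (ICD-9 style codes) made by
# two automated coders for the same death certificates.
acme = {"dc-001": "410.9", "dc-002": "162.9", "dc-003": "042", "dc-004": "436"}
scb  = {"dc-001": "410.9", "dc-002": "162.8", "dc-003": "042", "dc-004": "431"}

differences = {rid: (acme[rid], scb[rid])
               for rid in acme if acme[rid] != scb[rid]}
print(f"{len(differences)} of {len(acme)} records differ")

# Tally which ACME/SCB code pairs disagree most often -- the starting point for
# deciding whether a disagreement reflects an SCB error, a coding/keying error,
# or a genuine difference in interpreting the mortality coding rules.
print(Counter(differences.values()).most_common())
```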