906 results for software quality


Relevance:

30.00%

Publisher:

Abstract:

Despite research showing the benefits of glycemic control, it remains suboptimal among adults with diabetes in the United States. Possible reasons include unaddressed risk factors as well as lack of awareness of its immediate and long-term consequences. The objectives of this study were: 1) using cross-sectional data, to ascertain the association between suboptimal (hemoglobin A1c (HbA1c) ≥7%), borderline (HbA1c 7-8.9%), and poor (HbA1c ≥9%) glycemic control and potentially new risk factors (e.g., work characteristics); 2) to assess whether aspects of poor health and well-being, such as poor health-related quality of life (HRQOL), unemployment, and missed work, are associated with glycemic control; and 3) using prospective data, to assess the relationship between mortality risk and glycemic control in US adults with type 2 diabetes. Data from the 1988-1994 and 1999-2004 National Health and Nutrition Examination Surveys were used. HbA1c values were used to create dichotomous glycemic control indicators. Binary logistic regression models were used to assess relationships between risk factors, employment status, and glycemic control. Multinomial logistic regression analyses were conducted to assess relationships between glycemic control and HRQOL variables. Zero-inflated Poisson regression models were used to assess relationships between missed work days and glycemic control. Cox proportional hazards models were used to assess the effects of glycemic control on mortality risk. Using STATA software, analyses were weighted to account for the complex survey design and non-response. Multivariable models adjusted for socio-demographics and body mass index, among other variables. Results revealed that being a farm worker and working over 40 hours/week were risk factors for suboptimal glycemic control. More days of poor mental health were associated with suboptimal, borderline, and poor glycemic control. More days of inactivity were associated with poor glycemic control, while more days of poor physical health were associated with borderline glycemic control. There were no statistically significant relationships between glycemic control and self-reported general health, employment, or missed work. Finally, having an HbA1c value below 6.5% was protective against mortality. The findings suggest that work-related factors are important in a person's ability to reach optimal diabetes management levels. Poor glycemic control appears to have significant detrimental effects on HRQOL.
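As a rough illustration of the kind of modelling described above, the sketch below regresses a dichotomous HbA1c indicator on work characteristics with survey weights. It is only a sketch: the file name, column names, and weight variable are hypothetical, and statsmodels' frequency weights merely approximate Stata's complex-survey estimation (strata and primary sampling units are ignored here).

import pandas as pd
import statsmodels.api as sm

# Hypothetical extract of NHANES data, one row per adult with type 2 diabetes.
df = pd.read_csv("nhanes_diabetes_subset.csv")

# Dichotomous glycemic control indicator: suboptimal control is HbA1c >= 7%.
df["suboptimal"] = (df["hba1c"] >= 7.0).astype(int)

# Work characteristics plus common adjustment variables (column names are assumptions).
X = sm.add_constant(df[["farm_worker", "works_over_40h", "age", "bmi"]])

# Survey weights approximated as frequency weights; a full analysis would also
# account for the strata and PSUs of the complex survey design.
model = sm.GLM(df["suboptimal"], X,
               family=sm.families.Binomial(),
               freq_weights=df["exam_weight"])
print(model.fit().summary())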

Relevance:

30.00%

Publisher:

Abstract:

The spread of wireless networks and the growing proliferation of mobile devices require the development of mobility control mechanisms to support the different traffic demands under different network conditions. A major obstacle to developing this kind of technology is the complexity involved in handling all the information about the large number of Moving Objects (MO), as well as the entire signaling overhead required to manage these procedures in the network. Although several initiatives have been proposed by the scientific community to address this issue, they have not proved effective, since they depend on a specific request from the MO, which is responsible for triggering the mobility process. Moreover, they are often guided only by wireless medium statistics, such as the Received Signal Strength Indicator (RSSI) of the candidate Point of Attachment (PoA). Thus, this work seeks to develop, evaluate, and validate a sophisticated communication infrastructure for Wireless Networking for Moving Objects (WiNeMO) systems by making use of the flexibility provided by the Software-Defined Networking (SDN) paradigm, where network functions are easily and efficiently deployed by integrating the OpenFlow and IEEE 802.21 standards. For benchmarking purposes, the analysis covered both control plane and data plane aspects, and demonstrates that the proposal significantly outperforms typical IP-based SDN and QoS-enabled capabilities by allowing the network to handle multimedia traffic with optimal Quality of Service (QoS) transport and acceptable Quality of Experience (QoE) over time.
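As a loose illustration of the SDN side of such an approach, the sketch below uses the Ryu OpenFlow controller framework (not the authors' WiNeMO implementation, and without the IEEE 802.21 integration) to redirect a Moving Object's downstream traffic to its new Point of Attachment by installing a higher-priority flow rule; all names and values are assumptions.

from ryu.base import app_manager
from ryu.ofproto import ofproto_v1_3


class MobilityManager(app_manager.RyuApp):
    """Toy controller app: steer a Moving Object's traffic to its new PoA."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def redirect_to_new_poa(self, datapath, mo_ip, new_port):
        parser = datapath.ofproto_parser
        ofproto = datapath.ofproto
        # Match IPv4 traffic destined to the Moving Object.
        match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=mo_ip)
        # Forward it out of the switch port facing the new Point of Attachment.
        actions = [parser.OFPActionOutput(new_port)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
        mod = parser.OFPFlowMod(datapath=datapath, priority=100,
                                match=match, instructions=inst)
        datapath.send_msg(mod)

In a complete system this call would be triggered by media-independent handover events rather than invoked manually.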

Relevance:

30.00%

Publisher:

Abstract:

The maintenance and evolution of software systems has become a highly critical task in recent years due to the diversity and high demand of features, devices, and users. Understanding and analyzing how new changes impact the quality attributes of the architecture of such systems is an essential prerequisite for avoiding the deterioration of their quality during their evolution. This thesis proposes an automated approach for analyzing variation of the performance quality attribute in terms of execution time (response time). It is implemented by a framework that adopts dynamic analysis and software repository mining techniques to provide an automated way of revealing potential sources (commits and issues) of performance variation in scenarios during the evolution of software systems. The approach defines four phases: (i) preparation: choosing the scenarios and preparing the target releases; (ii) dynamic analysis: determining the performance of scenarios and methods by computing their execution times; (iii) variation analysis: processing and comparing the dynamic analysis results for different releases; and (iv) repository mining: identifying issues and commits associated with the detected performance variation. Empirical studies were carried out to evaluate the approach from different perspectives. An exploratory study analyzed the feasibility of applying the approach to systems from different domains in order to automatically identify source code elements with performance variation and the changes that affected those elements during an evolution. This study analyzed three systems: (i) SIGAA, a web system for academic management; (ii) ArgoUML, a UML modeling tool; and (iii) Netty, a framework for network applications. Another study performed an evolutionary analysis by applying the approach to multiple releases of Netty and of the web frameworks Wicket and Jetty. In this study, 21 releases (seven of each system) were analyzed, totaling 57 scenarios. In summary, 14 scenarios with significant performance variation were found for Netty, 13 for Wicket, and 9 for Jetty. Additionally, feedback was obtained from eight developers of these systems through an online form. Finally, in the last study, a regression model for performance was developed to indicate which commit properties are most likely to cause performance degradation. In total, 997 commits were mined, of which 103 were retrieved from degraded source code elements and 19 from optimized ones, while 875 had no impact on execution time. The number of days before the release was published and the day of the week proved to be the most relevant variables of performance-degrading commits in our model. The area under the receiver operating characteristic (ROC) curve of the regression model is 60%, which means that using the model to decide whether or not a commit will cause degradation is 10% better than a random decision.
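As a loose illustration of the variation-analysis phase (a minimal sketch, not the thesis framework; the scenario names, timing data, and the 10% threshold are illustrative assumptions), mean scenario execution times from two releases can be compared and flagged as follows:

from statistics import mean

def significant_variations(times_old, times_new, threshold=0.10):
    """times_old/times_new: dict mapping scenario name -> list of execution times (ms)."""
    flagged = {}
    for scenario in times_old.keys() & times_new.keys():
        old_avg = mean(times_old[scenario])
        new_avg = mean(times_new[scenario])
        change = (new_avg - old_avg) / old_avg
        if abs(change) >= threshold:
            flagged[scenario] = change
    return flagged

# Example: the "login" scenario degraded by 25% between releases, "search" did not.
old = {"login": [120, 118, 122], "search": [300, 305, 298]}
new = {"login": [150, 149, 151], "search": [301, 299, 303]}
print(significant_variations(old, new))

In the approach described above, the flagged scenarios would then be handed to the repository-mining phase to locate the commits and issues touching the affected code.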

Relevance:

30.00%

Publisher:

Abstract:

Product quality planning is a fundamental part of quality assurance in manufacturing. It comprises the distribution of quality aims over each phase of product development and the deployment of quality operations and resources to accomplish these aims. This paper proposes a quality planning methodology based on risk assessment, in which the planning tasks of product development are translated into the evaluation of risk priorities. Firstly, a comprehensive model for quality planning is developed to address the deficiencies of traditional quality function deployment (QFD) based quality planning. Secondly, a novel failure knowledge base (FKB) based method is discussed. Then a mathematical method and algorithm for risk assessment are presented for target decomposition, measure selection, and sequence optimization. Finally, the proposed methodology has been implemented in a web-based prototype software system, QQ-Planning, to solve the problem of quality planning regarding the distribution of quality targets and the deployment of quality resources, in such a way that the product requirements are satisfied and the enterprise resources are highly utilized. © Springer-Verlag Berlin Heidelberg 2010.
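As a generic illustration of risk-priority-driven planning (a sketch in the spirit of the FMEA-style scoring commonly used for such assessments; it is not the QQ-Planning algorithm, and all failure modes and scores are invented), failure modes can be ranked by a risk priority number and quality resources deployed to the highest-risk items first:

# Each failure mode gets severity, occurrence, and detection scores on a 1-10 scale.
failure_modes = [
    {"name": "weld porosity",   "severity": 8, "occurrence": 4, "detection": 6},
    {"name": "paint mismatch",  "severity": 3, "occurrence": 5, "detection": 2},
    {"name": "bolt torque low", "severity": 9, "occurrence": 2, "detection": 7},
]

for fm in failure_modes:
    # Risk priority number: severity x occurrence x detection.
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

# Deploy quality measures and resources to the highest-risk failure modes first.
for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
    print(f"{fm['name']:15s} RPN={fm['rpn']}")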

Relevance:

30.00%

Publisher:

Abstract:

Computed tomography (CT) is a valuable technology to the healthcare enterprise as evidenced by the more than 70 million CT exams performed every year. As a result, CT has become the largest contributor to population doses amongst all medical imaging modalities that utilize man-made ionizing radiation. Acknowledging the fact that ionizing radiation poses a health risk, there exists the need to strike a balance between diagnostic benefit and radiation dose. Thus, to ensure that CT scanners are optimally used in the clinic, an understanding and characterization of image quality and radiation dose are essential.

The state of the art in both image quality characterization and radiation dose estimation in CT depends on phantom-based measurements reflective of systems and protocols. For image quality characterization, measurements are performed on inserts embedded in static phantoms and the results are ascribed to clinical CT images. However, the key objective for image quality assessment should be its quantification in clinical images; that is the only characterization of image quality that clinically matters, as it is most directly related to the actual quality of clinical images. Moreover, for dose estimation, phantom-based dose metrics, such as the CT dose index (CTDI) and size-specific dose estimates (SSDE), are measured by the scanner and referenced as indicators of radiation exposure. However, CTDI and SSDE are surrogates for dose, rather than dose per se.
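For context, the relationship between these two phantom-based metrics can be written down directly. The sketch below computes SSDE by scaling CTDIvol with a size-dependent conversion factor; the coefficients approximate the exponential fit tabulated in AAPM Report 204 for the 32 cm body phantom and should be treated as illustrative.

import math

def ssde_from_ctdivol(ctdivol_mgy, effective_diameter_cm):
    # Size-dependent conversion factor (approximate AAPM 204 fit, 32 cm phantom).
    f = 3.704369 * math.exp(-0.03671937 * effective_diameter_cm)
    return f * ctdivol_mgy

# Example: a 10 mGy CTDIvol exam of a patient with a 28 cm effective diameter.
print(round(ssde_from_ctdivol(10.0, 28.0), 1))   # roughly 13 mGy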

Currently there are several software packages that track the CTDI and SSDE associated with individual CT examinations. This is primarily the result of two factors. The first is pressure from bureaucracies and governments on clinics and hospitals to monitor the radiation exposure of individuals in our society. The second is the personal concern of patients who are curious about the health risks associated with the ionizing radiation exposure they receive as a result of their diagnostic procedures.

An idea that resonates with clinical imaging physicists is that patients come to the clinic to acquire quality images so they can receive a proper diagnosis, not to be exposed to ionizing radiation. Thus, while it is important to monitor the dose to patients undergoing CT examinations, it is equally, if not more, important to monitor the image quality of the clinical images generated by the CT scanners throughout the hospital.

The purposes of the work presented in this thesis are threefold: (1) to develop and validate a fully automated technique to measure spatial resolution in clinical CT images, (2) to develop and validate a fully automated technique to measure image contrast in clinical CT images, and (3) to develop a fully automated technique to estimate radiation dose (not surrogates for dose) from a variety of clinical CT protocols.

Relevance:

30.00%

Publisher:

Abstract:

X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].

Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.

As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus a responsibility to optimize the amount of radiation dose for CT examinations. The key to dose optimization is to determine the minimum amount of radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would significantly benefit from effective metrics to characterize radiation dose and image quality for a CT exam. Moreover, if accurate predictions of the radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models that prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement the theoretical models into clinical practice by developing an organ-based dose monitoring system and an image-based noise addition software tool for protocol optimization.

More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current condition. The study effectively modeled the anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distribution. The dependence of organ dose coefficients on patient size and scanner models was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date with representative age, weight percentile, and body mass index (BMI) range.
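As a loose illustration of the kind of size dependence described above (a sketch only, not the thesis' actual model): CTDIvol-normalized organ dose coefficients are often fitted as an exponential function of patient diameter, h(d) = a * exp(-b * d). All numbers below are invented for illustration.

import numpy as np

def fit_size_dependence(diameters_cm, dose_coefficients):
    """Fit h(d) = a * exp(-b * d) by linear least squares on ln(h); return (a, b)."""
    slope, intercept = np.polyfit(diameters_cm, np.log(dose_coefficients), 1)
    return np.exp(intercept), -slope

def organ_dose(ctdivol_mgy, diameter_cm, a, b):
    return ctdivol_mgy * a * np.exp(-b * diameter_cm)

# Illustrative (made-up) CTDIvol-normalized liver dose coefficients vs. patient diameter.
d = np.array([20.0, 25.0, 30.0, 35.0])
h = np.array([1.60, 1.30, 1.05, 0.85])
a, b = fit_size_dependence(d, h)
print(round(organ_dose(10.0, 28.0, a, b), 1))   # predicted liver dose (mGy) at 28 cm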

With effective quantification of organ dose under constant tube current condition, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated based on Monte Carlo simulations with TCM function explicitly modeled.

Chapter 5 aims to implement the organ dose-estimation framework in clinical practice to develop an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations, so the patient's major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.

With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. Chapter 6 outlines the method that was developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.
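One common way to estimate noise directly in a clinical image (a minimal sketch, not necessarily the thesis algorithm) is to compute the local standard deviation over small patches and take a low percentile of the resulting map, so that edges and anatomy inflate the estimate as little as possible; the patch size and percentile below are assumptions.

import numpy as np
from scipy.ndimage import uniform_filter

def estimate_quantum_noise(image_hu, patch=7, percentile=10):
    """Estimate noise (HU) as a low percentile of the local standard deviation map."""
    img = image_hu.astype(float)
    local_mean = uniform_filter(img, patch)
    local_mean_sq = uniform_filter(img ** 2, patch)
    local_std = np.sqrt(np.clip(local_mean_sq - local_mean ** 2, 0.0, None))
    # A low percentile limits the influence of edges and anatomy on the estimate.
    return np.percentile(local_std, percentile)

# Check on a synthetic uniform "water" region with 12 HU of added noise.
rng = np.random.default_rng(0)
print(round(estimate_quantum_noise(rng.normal(0.0, 12.0, (512, 512))), 1))  # a little under 12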

Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
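A simplified image-domain stand-in for step (1) above (a sketch, not the thesis' validated noise-insertion technique) follows from the fact that quantum noise variance scales inversely with dose: reducing dose to a fraction r requires adding zero-mean noise with sigma_add = sigma_full * sqrt(1/r - 1).

import numpy as np

def simulate_reduced_dose(image_hu, sigma_full_hu, dose_fraction, seed=0):
    """Add noise to a full-dose image to emulate a scan at dose_fraction of the dose."""
    rng = np.random.default_rng(seed)
    sigma_add = sigma_full_hu * np.sqrt(1.0 / dose_fraction - 1.0)
    return image_hu + rng.normal(0.0, sigma_add, size=image_hu.shape)

# Example: a synthetic full-dose image with 10 HU of noise simulated at half dose;
# the result has roughly 10 * sqrt(2), i.e. about 14 HU, of noise.
rng = np.random.default_rng(1)
full_dose = rng.normal(0.0, 10.0, size=(512, 512))
half_dose = simulate_reduced_dose(full_dose, sigma_full_hu=10.0, dose_fraction=0.5)
print(round(half_dose.std(), 1))

A validated technique would additionally account for the spatial correlation (noise texture) and local dependence of noise in reconstructed CT images, which this sketch ignores.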

Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.

Relevance:

30.00%

Publisher:

Abstract:

Eight universities have collaborated in an Erasmus+ funded project to create a lean process to enhance self-evaluation and accreditation through peer alliance and cooperation. Central to this process is the partnering of two institutions as critical friends, based on prior self-evaluations of specific programmes, to identify particular criteria for improvement. A pairing algorithm matches two institutions based on their respective self-evaluation scores. It ensures there are significant differences in key criteria that are mutually beneficial for future programme development and enhancement. The ensuing meetings between critical friends have been designated as 'cross-sparring'. This paper focuses on a case study of the cross-sparring and resulting enhancement outcomes between Umeå University and Queen's University Belfast, and their respective Master's programmes in Software Engineering and Mechanical Engineering. The collaborative experiences of the process are evaluated, reported, and discussed, and conclusions are provided on the efficacy of this particular application of cross-sparring.
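As an illustrative reconstruction of such a pairing step (a sketch only, not the project's actual algorithm; all institutions beyond the two named and all scores are invented), candidate pairs can be ranked by how complementary their self-evaluation scores are:

from itertools import combinations

# Invented self-evaluation scores (1-5) on a few programme criteria.
scores = {
    "Umea":  {"assessment": 4, "industry_links": 2, "sustainability": 5},
    "QUB":   {"assessment": 2, "industry_links": 5, "sustainability": 3},
    "Uni_C": {"assessment": 3, "industry_links": 3, "sustainability": 3},
    "Uni_D": {"assessment": 5, "industry_links": 1, "sustainability": 2},
}

def complementarity(a, b):
    """Summed absolute score difference: larger means more to learn from each other."""
    return sum(abs(scores[a][c] - scores[b][c]) for c in scores[a])

# Rank candidate pairs; the most complementary pair becomes a critical-friend pairing.
pairs = sorted(combinations(scores, 2), key=lambda p: complementarity(*p), reverse=True)
print(pairs[0], complementarity(*pairs[0]))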

Relevance:

30.00%

Publisher:

Abstract:

[EN]Polygonal meshes are powerful structures to represent geometric information of the Earth’s surface. In particular, triangle meshes have been massively used as a reliable way to efficiently represent the land surface with real time responses in virtual navigation. In this work we present new ideas for the underlying treatment of a mesh that improve efficiency and quality in the navigation.
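For readers unfamiliar with the underlying structure, the sketch below shows a minimal indexed triangle mesh for a terrain patch (an illustrative data structure, not the paper's method): shared vertices are stored once and triangles reference them by index, which is the usual basis for efficient level-of-detail and real-time navigation schemes.

import numpy as np

class TriangleMesh:
    def __init__(self, vertices, triangles):
        self.vertices = np.asarray(vertices, dtype=float)   # (n, 3): x, y, elevation
        self.triangles = np.asarray(triangles, dtype=int)   # (m, 3): vertex indices

    def triangle_normals(self):
        v = self.vertices[self.triangles]
        n = np.cross(v[:, 1] - v[:, 0], v[:, 2] - v[:, 0])
        return n / np.linalg.norm(n, axis=1, keepdims=True)

# A 2x2 terrain grid split into two triangles.
mesh = TriangleMesh(
    vertices=[[0, 0, 10], [1, 0, 12], [0, 1, 11], [1, 1, 15]],
    triangles=[[0, 1, 2], [1, 3, 2]],
)
print(mesh.triangle_normals())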

Relevance:

30.00%

Publisher:

Abstract:

New technologies appear all the time, and their use can bring countless benefits both to those who use them directly and to society as a whole. In this context, the State can also use information and communication technologies to improve the level of service provided to citizens, give society a better quality of life, and optimize public spending by focusing it on the main needs. For this reason, there is much research on Electronic Government (e-Gov) policies and their main effects on citizens and on society as a whole. This research studies the concept of Electronic Government and seeks to understand the process of implementing Free Software in the agencies of the Direct Administration in Rio Grande do Norte. Moreover, it deepens the analysis to identify whether this implementation results in cost reductions for the state treasury, and aims to identify the role of Free Software in the Administration and the foundations of the Electronic Government policy in this State. Through qualitative interviews with technology coordinators and managers in three State Secretariats, it was possible to map the paths the Government has been following in order to endow the State with technological capacity. It was found that Rio Grande do Norte is still an immature State with regard to electronic government (e-Gov) practices and Free Software, with few agencies having concrete and viable initiatives in this area. The State still lacks a strategic definition of the role of Technology and more investment in staff and equipment infrastructure. Advances were also observed, such as the creation of the regulatory body CETIC (State Council of Information and Communication Technology), the Technology Master Plan, which provides a necessary diagnosis of the State's situation regarding Technology and proposes several goals for the area, a postgraduate course for Technology managers, and BrOffice (OpenOffice) training for 1,120 public servants.

Relevance:

30.00%

Publisher:

Abstract:

Context: Obfuscation is a common technique used to protect software against malicious reverse engineering. Obfuscators manipulate the source code to make it harder to analyze and more difficult to understand for the attacker. Although different obfuscation algorithms and implementations are available, they have never been directly compared in a large-scale study. Aim: This paper aims at evaluating and quantifying the effect of several different obfuscation implementations (both open source and commercial), to help developers and project managers decide which one could be adopted. Method: In this study we applied 44 obfuscations to 18 subject applications covering a total of 4 million lines of code. The effectiveness of these source code obfuscations has been measured using 10 code metrics, considering modularity, size, and complexity of code. Results: Results show that some of the considered obfuscations are effective in making code metrics change substantially from original to obfuscated code, although this change (called the potency of the obfuscation) differs across metrics. In the paper we recommend which obfuscations to select, given the security requirements of the software to be protected.
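For reference, the potency notion mentioned above is commonly defined (following Collberg et al.) as the relative change of a complexity metric from original to obfuscated code; the metric values in this sketch are invented for illustration.

def potency(metric_original, metric_obfuscated):
    """Potency > 0 means the obfuscation increased the metric (code got more complex)."""
    return metric_obfuscated / metric_original - 1.0

# Example: cyclomatic complexity rising from 120 to 180 gives a potency of 0.5.
print(potency(120, 180))   # 0.5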

Relevance:

30.00%

Publisher:

Abstract:

Purpose: To evaluate whether physical measures of noise predict image quality at high and low noise levels. Method: Twenty-four images were acquired on a DR system using a Pehamed DIGRAD phantom at three kVp settings (60, 70, and 81) across a range of mAs values. The image acquisition setup consisted of 14 cm of PMMA slabs with the phantom placed in the middle at 120 cm SID. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated for each of the images using ImageJ software, and 14 observers performed image scoring. Images were scored according to the observer's evaluation of objects visualized within the phantom. Results: The R2 values of the non-linear relationship between object visibility score and CNR (60 kVp R2 = 0.902; 70 kVp R2 = 0.913; 81 kVp R2 = 0.757) demonstrate a better fit for all three kVp settings than the linear R2 values. As CNR increases for all kVp settings, object visibility also increases. The largest increase in SNR at low exposure values (up to 2 mGy) is observed at 60 kVp when compared with 70 or 81 kVp; the CNR response to exposure is similar. Pearson r was calculated to assess the correlation between score, object visibility (OV), SNR, and CNR. None of the correlations reached statistical significance (p>0.01). Conclusion: For object visibility and SNR, tube potential variations may play a role in object visibility. Higher-energy X-ray beam settings give lower SNR but higher object visibility. Object visibility and CNR at all three tube potentials are similar, resulting in a strong positive relationship between CNR and object visibility score. At low doses the impact of radiographic noise does not have a strong influence on object visibility scores because objects could still be identified in noisy images.
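As a minimal sketch of the SNR and CNR calculations described (the regions here are synthetic stand-ins, not the study's actual ImageJ ROIs, and CNR definitions vary slightly between studies): SNR is the mean divided by the standard deviation inside a signal region, and CNR is the absolute difference of signal and background means divided by the background noise.

import numpy as np

def snr(roi):
    return roi.mean() / roi.std()

def cnr(roi_signal, roi_background):
    return abs(roi_signal.mean() - roi_background.mean()) / roi_background.std()

# Example with synthetic data standing in for two phantom ROIs.
rng = np.random.default_rng(0)
signal = rng.normal(150.0, 10.0, size=(50, 50))
background = rng.normal(100.0, 10.0, size=(50, 50))
print(round(snr(signal), 1), round(cnr(signal, background), 1))  # roughly 15 and 5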

Relevance:

30.00%

Publisher:

Abstract:

Large component-based systems are often built from many of the same components. As individual component-based software systems are developed, tested and maintained, these shared components are repeatedly manipulated. As a result there are often significant overlaps and synergies across and among the different test efforts of different component-based systems. However, in practice, testers of different systems rarely collaborate, taking a test-all-by-yourself approach. As a result, redundant effort is spent testing common components, and important information that could be used to improve testing quality is lost. The goal of this research is to demonstrate that, if done properly, testers of shared software components can save effort by avoiding redundant work, and can improve the test effectiveness for each component as well as for each component-based software system by using information obtained when testing across multiple components. To achieve this goal I have developed collaborative testing techniques and tools for developers and testers of component-based systems with shared components, applied the techniques to subject systems, and evaluated the cost and effectiveness of applying the techniques. The dissertation research is organized in three parts. First, I investigated current testing practices for component-based software systems to find the testing overlap and synergy we conjectured exists. Second, I designed and implemented infrastructure and related tools to facilitate communication and data sharing between testers. Third, I designed two testing processes to implement different collaborative testing algorithms and applied them to large actively developed software systems. This dissertation has shown the benefits of collaborative testing across component developers who share their components. With collaborative testing, researchers can design algorithms and tools to support collaboration processes, achieve better efficiency in testing configurations, and discover inter-component compatibility faults within a minimal time window after they are introduced.
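As a purely illustrative sketch of the basic idea (not the dissertation's infrastructure; component names, versions, and platforms are invented), testers of different systems could publish which configurations of a shared component they have already covered, so others skip redundant runs and focus on untested combinations:

# Shared registry: (component, version, platform) -> team that already tested it.
already_tested = {
    ("libparser", "2.1", "linux-x64"): "system-A team",
    ("libparser", "2.1", "win-x64"):   "system-B team",
}

def plan_tests(component, version, platforms):
    """Split platforms into those still needing tests and those whose results can be reused."""
    todo, reuse = [], []
    for p in platforms:
        key = (component, version, p)
        (reuse if key in already_tested else todo).append(p)
    return todo, reuse

todo, reuse = plan_tests("libparser", "2.1", ["linux-x64", "win-x64", "macos-arm64"])
print("run:", todo)             # ['macos-arm64']
print("reuse results:", reuse)  # ['linux-x64', 'win-x64']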

Relevance:

30.00%

Publisher:

Abstract:

Embedded software systems in vehicles are of rapidly increasing commercial importance for the automotive industry. Current systems employ a static run-time environment, due to the difficulty and cost involved in developing dynamic systems in a high-integrity embedded control context. A dynamic system, in terms of its configuration, would greatly increase the flexibility of the offered functionality and enable customised software configuration for individual vehicles, adding customer value through plug-and-play capability and increased quality due to its inherent ability to adjust to changes in hardware and software. We envisage an automotive system containing a variety of components, from a multitude of organizations, not necessarily known at development time. The system dynamically adapts its configuration to suit the run-time system constraints. This paper presents our vision for future automotive control systems that will be addressed in an EU research project referred to as DySCAS (Dynamically Self-Configuring Automotive Systems). We propose a self-configuring vehicular control system architecture, with capabilities that include automatic discovery and inclusion of new devices, self-optimisation to make best use of the available processing, storage, and communication resources, self-diagnostics, and ultimately self-healing. Such an architecture has benefits extending to reduced development and maintenance costs, improved passenger safety and comfort, and flexible owner customisation. Specifically, this paper addresses the following issues: the state of the art of embedded software systems in vehicles, emphasising the current limitations arising from fixed run-time configurations; and the benefits and challenges of dynamic configuration, giving rise to opportunities for self-healing, self-optimisation, and the automatic inclusion of users' Consumer Electronics (CE) devices. Our proposal for a dynamically reconfigurable automotive software system platform is outlined, and a typical use case is presented as an example to illustrate the benefits of the envisioned dynamic capabilities.
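As an illustrative toy (not the DySCAS architecture; names and the resolution policy are assumptions), the dynamic discovery and inclusion of devices described above can be pictured as a service registry that re-resolves bindings whenever hardware or software changes:

class ServiceRegistry:
    def __init__(self):
        self._providers = {}          # service name -> list of component ids

    def register(self, component_id, services):
        for s in services:
            self._providers.setdefault(s, []).append(component_id)

    def unregister(self, component_id):
        for providers in self._providers.values():
            if component_id in providers:
                providers.remove(component_id)

    def resolve(self, service):
        providers = self._providers.get(service, [])
        # Trivial first-provider policy; a real platform would optimise for
        # resource use, integrity level, and so on.
        return providers[0] if providers else None

registry = ServiceRegistry()
registry.register("head_unit", ["media_playback", "navigation"])
registry.register("phone_123", ["media_playback"])   # a passenger's CE device plugs in
print(registry.resolve("media_playback"))            # 'head_unit'
registry.unregister("head_unit")                     # device removed or failed
print(registry.resolve("media_playback"))            # falls back to 'phone_123'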