7 results for Software quality

in Digital Commons at Florida International University


Relevance:

80.00%

Publisher:

Abstract:

Software development is an extremely complex process, during which human errors are introduced and result in faulty software systems. It is highly desirable and important that these errors be prevented and detected as early as possible. A software architecture design is a high-level system description that embodies many system features and properties eventually implemented in the final operational system. Therefore, methods for modeling and analyzing software architecture descriptions can help prevent and reveal human errors and thus improve software quality. Furthermore, if an analyzed software architecture description can be used to derive a partial software implementation, especially when the derivation can be automated, significant benefits can be gained in both system quality and productivity. This dissertation proposes a framework for integrated analysis of both the design and the implementation. To ensure the desirable properties of the architecture model, we apply formal verification using the model checking technique. To ensure the desirable properties of the implementation, we develop a methodology and an associated tool to translate an architecture specification into an implementation written in a combination of the ArchJava, Java, and AspectJ programming languages. The translation is semi-automatic, so many manual programming errors can be prevented. Furthermore, the translation inserts monitoring code into the implementation so that runtime verification can be performed, providing additional assurance of the quality of the implementation. Moreover, the translations from architecture model to program are validated. Finally, several case studies are conducted and presented.
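The runtime-verification step can be pictured with a short sketch. The dissertation's tool weaves monitoring code as AspectJ advice into Java; the Python decorator below is only an illustrative analogue, and the `BoundedBuffer` component and its capacity invariant are invented for the example.

```python
import functools

def monitor(invariant, message):
    """Wrap a method with a runtime check, in the spirit of the
    monitoring code the translator weaves into the implementation.
    (Illustrative only: the dissertation's tool emits AspectJ advice.)"""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(self, *args, **kwargs):
            result = fn(self, *args, **kwargs)
            if not invariant(self):   # check the property after each call
                raise AssertionError(f"runtime verification failed: {message}")
            return result
        return wrapper
    return decorator

class BoundedBuffer:
    """Hypothetical component derived from an architecture specification."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    @monitor(lambda self: len(self.items) <= self.capacity,
             "buffer must never exceed its declared capacity")
    def put(self, item):
        self.items.append(item)
```

A violation of the declared property is then caught at the moment it occurs in the running system, rather than surfacing later as corrupted state.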

Relevance:

30.00%

Publisher:

Abstract:

A methodology for formally modeling and analyzing the software architecture of mobile agent systems provides a solid basis for developing high-quality mobile agent systems, and the methodology is helpful for studying other distributed and concurrent systems as well. However, providing such a methodology is a challenge because of the agent mobility in mobile agent systems. The methodology was defined from two essential parts of software architecture: a formalism to define the architectural models and an analysis method to formally verify system properties. The formalism is two-layer Predicate/Transition (PrT) nets extended with dynamic channels, and the analysis method is a hierarchical approach that verifies models at different levels. The two-layer modeling formalism smoothly transforms physical models of mobile agent systems into their architectural models. Dynamic channels facilitate synchronous communication between nets, and they naturally capture the dynamic architecture configuration and agent mobility of mobile agent systems. Component properties are verified on the transformed individual components, system properties are checked on a simplified system model, and interaction properties are analyzed on models composed from the nets involved. Based on the formalism and the analysis method, a software architecture for mobile agent systems was formally modeled and analyzed, and an architectural model of a medical information processing system based on mobile agents was designed. The model checking tool SPIN was used to verify system properties such as reachability, concurrency, and safety of the medical information processing system. From the successful modeling and analysis of the software architecture of mobile agent systems, the conclusion is that PrT nets extended with channels are a powerful tool for modeling mobile agent systems, and the hierarchical analysis method provides a rigorous foundation for the modeling tool. The hierarchical analysis method not only reduces the complexity of the analysis but also expands the application scope of model checking techniques. The results of formally modeling and analyzing the software architecture of the medical information processing system show that model checking is an effective and efficient way to verify software architecture. Moreover, this system demonstrates the flexibility, efficiency, and low cost of mobile agent technologies.
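To make the net-based modeling concrete, here is a minimal sketch of the basic place/transition firing rule. The dissertation's formalism is far richer (two-layer PrT nets with dynamic channels, verified with SPIN); the places and the migration transition below are invented purely for illustration.

```python
# Minimal place/transition net: a transition fires when every input
# place holds a token; firing moves tokens from inputs to outputs.
# (The dissertation's formalism is richer: two-layer PrT nets with
# dynamic channels; this only illustrates the basic firing rule.)
from collections import Counter

marking = Counter({"agent_at_host_A": 1, "channel_free": 1})

transitions = {
    "migrate_A_to_B": {
        "inputs":  ["agent_at_host_A", "channel_free"],
        "outputs": ["agent_at_host_B", "channel_free"],
    },
}

def enabled(name):
    t = transitions[name]
    return all(marking[p] >= 1 for p in t["inputs"])

def fire(name):
    t = transitions[name]
    assert enabled(name), f"{name} is not enabled"
    for p in t["inputs"]:
        marking[p] -= 1
    for p in t["outputs"]:
        marking[p] += 1

fire("migrate_A_to_B")
print(dict(marking))   # the agent token has moved to host B
```

Agent mobility then corresponds to tokens moving between host places, which is exactly the kind of reachability question a model checker can answer over the net's state space.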

Relevance:

30.00%

Publisher:

Abstract:

The objective of this study was to develop a model to predict the transport and fate of gasoline components of environmental concern in the Miami River by mathematically simulating the movement of dissolved benzene, toluene, xylene (BTX), and methyl tertiary-butyl ether (MTBE) resulting from minor gasoline spills in the inter-tidal zone of the river. The computer codes were based on mathematical algorithms that acknowledge the role of advective and dispersive physical phenomena along the river and the prevailing phase transformations of BTX and MTBE. Phase transformations included volatilization and settling. The model used a finite-difference scheme under steady-state conditions, with a set of numerical equations solved by two numerical methods: Gauss-Seidel and Jacobi iteration. A numerical validation process was conducted by comparing the results from both methods with analytical and numerical reference solutions. Since similar trends were achieved after the numerical validation process, it was concluded that the computer codes were algorithmically correct. The Gauss-Seidel iteration converged faster than the Jacobi iteration, and its code was therefore selected for further development of the computer program and software. The model was then analyzed for its sensitivity. It was found that the model was very sensitive to wind speed but not to sediment settling velocity. Computer software was developed with the model code embedded. The software provides two main user-friendly visual forms, one to interface with the database files and the other to execute the model and present the graphical and tabulated results. For all predicted concentrations of BTX and MTBE, the maximum concentrations were over an order of magnitude lower than current drinking water standards. It should be pointed out, however, that concentrations below these reported standards, although not harmful to humans, may be very harmful to organisms at the trophic levels of the Miami River ecosystem and associated waters. This computer model can be used for the rapid assessment and management of the effects of minor gasoline spills on inter-tidal riverine water quality.
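The difference between the two iteration schemes is easy to see on a toy version of the problem. The sketch below solves a 1-D steady-state advection-dispersion equation with a first-order loss term by both Jacobi and Gauss-Seidel iteration; the grid size and all coefficient values are illustrative, not the calibrated Miami River parameters.

```python
import numpy as np

# Steady-state 1-D advection-dispersion with first-order loss:
#   D*C'' - u*C' - k*C = 0,  C(0)=C0 (spill), C(L)=0.
# Central differences give  c_mid*C[i] = a*C[i-1] + b*C[i+1].
# All coefficient values below are illustrative, not the river's.
D, u, k = 5.0, 0.2, 1e-4          # dispersion, velocity, loss rate
n, dx = 50, 10.0
a = D/dx**2 + u/(2*dx)            # upstream neighbour weight
b = D/dx**2 - u/(2*dx)            # downstream neighbour weight
c_mid = 2*D/dx**2 + k

def solve(gauss_seidel, tol=1e-10, max_iter=100_000):
    C = np.zeros(n); C[0] = 1.0   # unit concentration at the spill
    for it in range(max_iter):
        C_old = C.copy()
        src = C if gauss_seidel else C_old   # in-place vs lagged updates
        for i in range(1, n - 1):
            C[i] = (a * src[i-1] + b * C_old[i+1]) / c_mid
        if np.max(np.abs(C - C_old)) < tol:
            return it
    return max_iter

print("Jacobi iterations:      ", solve(gauss_seidel=False))
print("Gauss-Seidel iterations:", solve(gauss_seidel=True))
```

Because Gauss-Seidel reuses values already updated within the current sweep, it typically needs roughly half as many iterations as Jacobi on diagonally dominant systems like this one, which matches the convergence behaviour reported in the study.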

Relevance:

30.00%

Publisher:

Abstract:

The Unified Modeling Language (UML) is the most comprehensive and widely accepted object-oriented modeling language, owing to its multi-paradigm modeling capabilities and easy-to-use graphical notations, with strong international organizational support and production-quality industrial tool support. However, there is a lack of precise definition of the semantics of individual UML notations, as well as of the relationships among multiple UML models, which often introduces incompleteness and inconsistency problems into UML software designs, especially for complex systems. Furthermore, there is a lack of methodologies to ensure a correct implementation from a given UML design. The purpose of this investigation is to verify and validate software designs in UML and to provide dependability assurance for the realization of a UML design. In this research, an approach is proposed to transform UML diagrams into a semantic domain, which is a formal component-based framework. The proposed framework consists of components and interactions through message passing, which are modeled by two-layer algebraic high-level nets and transformation rules, respectively. In the transformation approach, class diagrams, state machine diagrams, and activity diagrams are transformed into component models, and transformation rules are extracted from interaction diagrams. By applying transformation rules to component models, a (sub)system model of one or more scenarios can be constructed. Various techniques, such as model checking and Petri net analysis, can be adopted to check whether UML designs are complete and consistent. A new component called the property parser was developed and merged into the tool SAM Parser, which realizes (sub)system models automatically. The property parser automatically generates and weaves runtime monitoring code into system implementations for dependability assurance. The framework in this investigation is both novel and flexible, since it not only can be used to verify and validate UML designs but also provides an approach to building models for various scenarios. As a result of this research, several kinds of previously ignored behavioral inconsistencies can be detected.
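As a rough illustration of the transformation idea, the sketch below turns a toy UML state machine (states plus event-labelled transitions) into an executable component model whose behaviour can be exercised and checked for inconsistencies. The dissertation's real target is a two-layer algebraic high-level net, not a Python class; every name here is invented.

```python
# Toy illustration only: a UML state machine (states + event-labelled
# transitions) becomes a component model whose behaviour can then be
# composed with other components and analysed.  The dissertation maps
# diagrams to algebraic high-level nets; these names are invented.
statemachine = {
    "states": ["Idle", "Busy"],
    "initial": "Idle",
    "transitions": [("Idle", "request", "Busy"),
                    ("Busy", "done", "Idle")],
}

class Component:
    def __init__(self, sm):
        self.state = sm["initial"]
        self.table = {(s, e): t for s, e, t in sm["transitions"]}

    def handle(self, event):
        """Fire the transition for `event`, or flag an inconsistency
        (an event the design never anticipated in this state)."""
        key = (self.state, event)
        if key not in self.table:
            raise ValueError(f"inconsistent design: {event!r} in {self.state!r}")
        self.state = self.table[key]

c = Component(statemachine)
c.handle("request")     # Idle -> Busy
c.handle("done")        # Busy -> Idle
```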

Relevance:

30.00%

Publisher:

Abstract:

Despite research showing the benefits of glycemic control, it remains suboptimal among adults with diabetes in the United States. Possible reasons include unaddressed risk factors as well as a lack of awareness of its immediate and long-term consequences. The objectives of this study were, using cross-sectional data, to (1) ascertain the association between suboptimal (Hemoglobin A1c (HbA1c) ≥7%), borderline (HbA1c 7-8.9%), and poor (HbA1c ≥9%) glycemic control and potentially new risk factors (e.g., work characteristics), and (2) assess whether aspects of poor health and well-being, such as poor health-related quality of life (HRQOL), unemployment, and missed work, are associated with glycemic control; and, using prospective data, (3) assess the relationship between mortality risk and glycemic control in US adults with type 2 diabetes. Data from the 1988-1994 and 1999-2004 National Health and Nutrition Examination Surveys were used. HbA1c values were used to create dichotomous glycemic control indicators. Binary logistic regression models were used to assess relationships between risk factors, employment status, and glycemic control. Multinomial logistic regression analyses were conducted to assess relationships between glycemic control and HRQOL variables. Zero-inflated Poisson regression models were used to assess relationships between missed work days and glycemic control. Cox proportional hazards models were used to assess effects of glycemic control on mortality risk. Using STATA software, analyses were weighted to account for the complex survey design and non-response. Multivariable models adjusted for socio-demographics and body mass index, among other variables. Results revealed that being a farm worker and working over 40 hours/week were risk factors for suboptimal glycemic control. Having more days of poor mental health was associated with suboptimal, borderline, and poor glycemic control. Having more days of inactivity was associated with poor glycemic control, while having more days of poor physical health was associated with borderline glycemic control. There were no statistically significant relationships between glycemic control and self-reported general health, employment, or missed work. Finally, having an HbA1c value below 6.5% was protective against mortality. The findings suggest that work-related factors are important in a person's ability to reach optimal diabetes management levels. Poor glycemic control appears to have significant detrimental effects on HRQOL.
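The cut points that define the glycemic-control indicators can be stated compactly in code. The sketch below is only a restatement of the thresholds given in the abstract; the function and key names are illustrative.

```python
# Sketch of the HbA1c cut points used to build the glycemic-control
# indicators described above (variable names are illustrative).
def glycemic_categories(hba1c):
    return {
        "suboptimal": hba1c >= 7.0,          # HbA1c >= 7%
        "borderline": 7.0 <= hba1c < 9.0,    # HbA1c 7-8.9%
        "poor":       hba1c >= 9.0,          # HbA1c >= 9%
        "protective": hba1c < 6.5,           # associated with lower mortality
    }

print(glycemic_categories(8.2))
# {'suboptimal': True, 'borderline': True, 'poor': False, 'protective': False}
```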

Relevance:

30.00%

Publisher:

Abstract:

The development of 3G (third-generation telecommunications) value-added services brings higher requirements for Quality of Service (QoS). Wideband Code Division Multiple Access (WCDMA) is one of the three 3G standards, and enhancing QoS for the WCDMA Core Network (CN) is increasingly important for users and carriers. This dissertation focuses on enhancing QoS for the WCDMA CN, with the purpose of realizing the DiffServ (Differentiated Services) model of QoS for the WCDMA CN. Based on the parallelism characteristic of Network Processors (NPs), NP programming models are classified as Pool of Threads (POTs) and Hyper Task Chaining (HTC). In this study, an integrated programming model that combines both was designed. This model is highly efficient and flexible, and it also solves the problems of sharing conflicts and packet ordering. We used it as the programming model to realize DiffServ QoS for the WCDMA CN. The realization mechanism of the DiffServ model mainly consists of buffer management, packet scheduling, and packet classification algorithms based on NPs. First, we proposed an adaptive buffer management algorithm called Packet Adaptive Fair Dropping (PAFD), which takes into consideration both fairness and throughput and has smooth service curves. Then, an improved packet scheduling algorithm called Priority-based Weighted Fair Queuing (PWFQ) was introduced to ensure the fairness of packet scheduling and reduce the queuing time of data packets, while keeping delay and jitter within a small range. Thirdly, a multi-dimensional packet classification algorithm called Classification Based on Network Processors (CBNPs) was designed. It effectively reduces memory accesses and storage space, and it offers lower time and space complexity. Lastly, an integrated hardware and software system implementing the DiffServ model of QoS for the WCDMA CN was proposed and implemented on the NP IXP2400. According to the corresponding experimental results, the proposed system significantly enhanced QoS for the WCDMA CN. It substantially improves response-time consistency, display distortion, and sound-image synchronization, and thus increases network efficiency and saves network resources.
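For readers unfamiliar with weighted fair queuing, the sketch below shows the classic WFQ core (packets released in order of virtual finish time) with a simple priority field layered on top. The abstract does not spell out the PWFQ algorithm, so this is only a generic skeleton, not the dissertation's scheduler; all flow names and parameters are invented.

```python
import heapq

# Classic WFQ core: each flow gets a weight; packets are released in
# order of virtual finish time  F = max(F_prev, now) + size/weight.
# PWFQ as described above adds priority on top; its exact algorithm is
# not reproduced here, so this uses priority only as a coarse first key.
class Scheduler:
    def __init__(self):
        self.finish = {}        # last virtual finish time per flow
        self.queue = []         # (priority, finish_time, seq, packet)
        self.seq = 0

    def enqueue(self, flow, priority, size, weight, now=0.0):
        f = max(self.finish.get(flow, 0.0), now) + size / weight
        self.finish[flow] = f
        heapq.heappush(self.queue, (priority, f, self.seq, (flow, size)))
        self.seq += 1

    def dequeue(self):
        return heapq.heappop(self.queue)[3]

s = Scheduler()
s.enqueue("voice", priority=0, size=200,  weight=4.0)  # delay-sensitive flow
s.enqueue("data",  priority=1, size=1500, weight=1.0)
s.enqueue("data",  priority=1, size=1500, weight=1.0)
print(s.dequeue())  # ('voice', 200): served first by priority, then fairness
```

The weight controls each flow's long-run share of the link, while the priority key bounds the queuing delay of delay-sensitive traffic, which is the combination the PWFQ algorithm aims at.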

Relevance:

30.00%

Publisher:

Abstract:

As users continually request additional functionality, software systems will continue to grow in complexity, as well as in their susceptibility to failures. Particularly for sensitive systems requiring higher levels of reliability, faulty system modules may increase development and maintenance costs. Hence, identifying them early would support the development of reliable systems through improved scheduling and quality control. Research effort to predict which software modules are likely to contain faults has, as a consequence, been substantial. Although a wide range of fault prediction models have been proposed, we remain far from having reliable tools that can be widely applied to real industrial systems. For projects with known fault histories, numerous research studies show that statistical models can provide reasonable estimates when predicting faulty modules from software metrics. However, as context-specific metrics differ from project to project, predicting across projects is difficult. Prediction models obtained from one project's experience are ineffective at predicting fault-prone modules when applied to other projects. Hence, taking full advantage of the existing work in the software development community has been substantially limited. As a step towards solving this problem, in this dissertation we propose a fault prediction approach that exploits existing prediction models, adapting them to improve their ability to predict faulty system modules across different software projects.
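One simple way to picture the adaptation idea: a fault-prediction model trained on one project's metrics is reused on another project after rescaling that project's metrics to the source project's distribution. The dissertation's actual adaptation approach is not reproduced here; the synthetic data and the standardisation step below are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Project A: software metrics with known fault labels (synthetic stand-ins).
X_a = rng.normal(size=(200, 3))
y_a = (X_a[:, 0] + X_a[:, 1] > 0.5).astype(int)

# Project B: the same metrics, but on a different scale -- the
# context-specific drift that defeats naive cross-project prediction.
X_b = rng.normal(loc=2.0, scale=3.0, size=(100, 3))
y_b = (X_b[:, 0] + X_b[:, 1] > 5.0).astype(int)

model = LogisticRegression().fit(X_a, y_a)    # reuse project A's model

# One elementary adaptation step: rescale project B's metrics to
# project A's distribution before scoring.  (The dissertation's
# adaptation approach is more involved; this only conveys the idea.)
X_b_adapted = (X_b - X_b.mean(axis=0)) / X_b.std(axis=0)
X_b_adapted = X_b_adapted * X_a.std(axis=0) + X_a.mean(axis=0)

acc_raw = (model.predict(X_b) == y_b).mean()
acc_adp = (model.predict(X_b_adapted) == y_b).mean()
print(f"cross-project accuracy: raw {acc_raw:.2f}, adapted {acc_adp:.2f}")
```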
