954 results for Computer software - Quality control
Abstract:
Master's dissertation, Qualidade em Análises, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2014
Abstract:
Process supervision is the activity of monitoring process operation in order to infer the conditions needed to maintain normal behaviour, including when faults are present. Depending on the number, distribution, and heterogeneity of a process's variables, behaviour situations, sub-processes, and so on, human operators and engineers cannot easily handle the information, which makes the automation of supervision activities necessary. Nevertheless, the difficulty of dealing with this information complicates the design and development of supervision software. We present an approach called "integrated supervision systems": it proposes the coordination of multiple supervisors, each supervising a sub-process, whose interactions make it possible to supervise the global process.
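A minimal Python sketch of the coordination idea; the class names, threshold check, and sub-process names below are illustrative assumptions, not the authors' system:

```python
# Minimal sketch (not the authors' system): each supervisor watches one
# sub-process and reports a status; a coordinator aggregates these reports
# into a global assessment. Names and limits are illustrative only.
from dataclasses import dataclass

@dataclass
class Status:
    name: str
    faulty: bool
    detail: str = ""

class SubProcessSupervisor:
    def __init__(self, name: str, limit: float):
        self.name, self.limit = name, limit

    def check(self, measurement: float) -> Status:
        # A single-variable threshold check stands in for real fault detection.
        faulty = measurement > self.limit
        return Status(self.name, faulty, f"value={measurement}, limit={self.limit}")

class GlobalSupervisor:
    """Coordinates sub-process supervisors to assess the whole process."""
    def __init__(self, supervisors):
        self.supervisors = supervisors

    def supervise(self, measurements: dict) -> list[Status]:
        return [s.check(measurements[s.name]) for s in self.supervisors]

if __name__ == "__main__":
    plant = GlobalSupervisor([SubProcessSupervisor("reactor", 350.0),
                              SubProcessSupervisor("cooler", 90.0)])
    for status in plant.supervise({"reactor": 362.5, "cooler": 71.0}):
        print(status)
```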
Abstract:
Doctoral thesis (Nuclear Technology)
Abstract:
Over the last ten years, the cost of maintaining object-oriented systems has risen to account for more than 70% of their total cost. This situation is due to several factors, the most important of which are imprecise user specifications, a rapidly changing execution environment, and the poor internal quality of the systems. Of all these factors, the only one over which we have real control is the internal quality of the systems. Many quality models have been proposed in the literature to help control quality. However, most of these models use class metrics (for example, the number of methods of a class) or metrics of relationships between classes (for example, the coupling between two classes) to measure the internal attributes of systems. Yet the quality of object-oriented systems depends not only on the structure of their classes, which is what these metrics measure, but also on the way the classes are organized, that is, on their design, which generally manifests itself through design patterns and anti-patterns. In this thesis we propose the DEQUALITE method, which makes it possible to systematically build quality models that take into account not only the internal attributes of systems (through metrics) but also their design (through design patterns and anti-patterns). The method uses a learning approach based on Bayesian networks and builds on the results of a series of experiments evaluating the impact of design patterns and anti-patterns on system quality. These experiments, carried out on 9 large open-source object-oriented systems, lead to the following conclusions:
• Contrary to intuition, design patterns do not always improve the quality of systems; highly coupled implementations of design patterns, for example, affect the structure of classes and have a negative impact on their change-proneness and fault-proneness.
• Classes participating in anti-patterns are much more likely to change and to be involved in fault fixes than the other classes of a system.
• A non-negligible percentage of classes are involved simultaneously in design patterns and in anti-patterns; design patterns have a positive effect in the sense that they mitigate the anti-patterns.
We apply and validate our method on three open-source object-oriented systems to demonstrate the contribution of system design to quality assessment.
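As a hedged illustration only (DEQUALITE itself learns Bayesian networks from experimental data on real systems), the sketch below uses a plain naive Bayes classifier on synthetic per-class features to show the idea of combining class metrics with design-pattern and anti-pattern participation when predicting fault-proneness; every number and feature name is an assumption:

```python
# Illustrative sketch, not the DEQUALITE method: a naive Bayes classifier
# stands in for the Bayesian-network models, trained on synthetic data.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Features per class: [number of methods, coupling, in_design_pattern, in_anti_pattern]
X = np.array([
    [12,  3, 1, 0],
    [45, 18, 0, 1],
    [ 8,  2, 1, 0],
    [60, 25, 1, 1],
    [20,  7, 0, 0],
    [55, 22, 0, 1],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = class was involved in fault fixes

model = GaussianNB().fit(X, y)
candidate = np.array([[50, 20, 0, 1]])   # a large, coupled class inside an anti-pattern
print("P(fault-prone) =", model.predict_proba(candidate)[0, 1])
```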
Abstract:
Images obtained from high-throughput mass spectrometry (MS) contain information that remains hidden when looking at a single spectrum at a time. Image processing of liquid chromatography-MS datasets can be extremely useful for quality control, experimental monitoring and knowledge extraction. The importance of imaging in differential analysis of proteomic experiments has already been established through two-dimensional gels and can now be foreseen with MS images. We present MSight, new software designed to construct and manipulate MS images, as well as to facilitate their analysis and comparison.
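As a rough illustration of what an MS image is (not MSight's implementation), the sketch below bins synthetic (retention time, m/z, intensity) peaks into a two-dimensional intensity matrix; all peak values and bin sizes are assumptions:

```python
# Minimal sketch of an LC-MS "image": spectra stacked so that retention time
# indexes one axis, m/z the other, and intensity becomes the pixel value.
import numpy as np

# (retention_time_min, m_over_z, intensity) triples, e.g. parsed from raw spectra
peaks = np.array([
    [12.1,  445.2, 1.0e5],
    [12.2,  445.2, 3.2e5],
    [12.3,  445.3, 1.1e5],
    [30.8, 1022.6, 8.0e4],
    [31.0, 1022.7, 2.4e5],
])

rt_edges = np.linspace(0, 60, 121)      # 0.5 min bins along the chromatographic axis
mz_edges = np.linspace(300, 1200, 451)  # 2 Th bins along the m/z axis

image, _, _ = np.histogram2d(peaks[:, 0], peaks[:, 1],
                             bins=[rt_edges, mz_edges],
                             weights=peaks[:, 2])
print("image shape:", image.shape, "max pixel intensity:", image.max())
```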
Abstract:
MRI has evolved into an important diagnostic technique in medical imaging. However, reliability of the derived diagnosis can be degraded by artifacts, which challenge both radiologists and automatic computer-aided diagnosis. This work proposes a fully-automatic method for measuring image quality of three-dimensional (3D) structural MRI. Quality measures are derived by analyzing the air background of magnitude images and are capable of detecting image degradation from several sources, including bulk motion, residual magnetization from incomplete spoiling, blurring, and ghosting. The method has been validated on 749 3D T1-weighted 1.5T and 3T head scans acquired at 36 Alzheimer's Disease Neuroimaging Initiative (ADNI) study sites operating with various software and hardware combinations. Results are compared against qualitative grades assigned by the ADNI quality control center (taken as the reference standard). The derived quality indices are independent of the MRI system used and agree with the reference standard quality ratings with high sensitivity and specificity (>85%). The proposed procedures for quality assessment could be of great value for both research and routine clinical imaging. They could greatly improve workflow through their ability to rule out the need for a repeat scan while the patient is still in the magnet bore.
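A hedged sketch of the core idea, not the published ADNI procedure: quality information is read from the air background of a magnitude image, whose noise should be approximately Rayleigh-distributed unless artifacts such as ghosting contaminate it. The synthetic volume, threshold, and index definition below are assumptions:

```python
# Sketch only: estimate the noise level robustly from the background median,
# then report the fraction of background voxels far above the noise floor.
import numpy as np

rng = np.random.default_rng(0)
vol = rng.rayleigh(scale=5.0, size=(64, 64, 64))   # air background: Rayleigh noise
vol[16:48, 16:48, 16:48] += 400.0                  # the imaged object ("head")
vol[16:48, 8:12, 16:48] += 20.0                    # a faint ghosting artifact in the air

# Separate the air background from the object with a crude intensity threshold.
background = vol[vol < 0.1 * vol.max()]

# Robust noise estimate: the median of a Rayleigh distribution is sigma*sqrt(2 ln 2).
sigma = np.median(background) / np.sqrt(2 * np.log(2))

# Simple quality index: fraction of background voxels far above the expected
# noise floor; essentially zero for a clean scan, raised by structured artifacts.
artifact_fraction = np.mean(background > 5 * sigma)
print(f"estimated noise sigma: {sigma:.2f}, artifact fraction: {artifact_fraction:.4%}")
```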
Abstract:
A mail survey was conducted to assess current computer hardware use and perceived needs of potential users for software related to crop pest management in Nebraska. Surveys were sent to University of Nebraska-Lincoln agricultural extension agents, agribusiness personnel (including independent crop consultants), and crop producers identified by extension agents as computer users. There were no differences among the groups in several aspects of computer hardware use (percentage using computers, percentage with IBM-compatible computers, amount of RAM, percentage with a hard drive, hard drive size, or monitor graphics capability). Responses were similar among the three groups in several areas that are important to crop pest management (pest identification, pest biology, treatment decision making, control options, and pesticide selection), and a majority of each group expressed the need for additional sources of such information about insects, diseases, and weeds. However, agents mentioned vertebrate pest management information as a need more often than the other two groups. Also, majorities of each group expressed an interest in using computer software, if available, to obtain information in these areas. Appropriate software to address these needs should find an audience among all three groups.
Abstract:
Transportation Department, Office of University Research, Washington, D.C.
Abstract:
Federal Energy Administration, Office of Transportation Policy and Research, Washington, D.C.
Abstract:
Computer software plays an important role in business, government, society, and the sciences. To solve real-world problems, it is very important to measure quality and reliability throughout the software development life cycle (SDLC). Software Engineering (SE) is the computing field concerned with designing, developing, implementing, maintaining, and modifying software. The present paper gives an overview of Data Mining (DM) techniques that can be applied to various types of SE data in order to address the challenges posed by SE tasks such as programming, bug detection, debugging, and maintenance. A specific DM tool is discussed, namely an analytical tool for analyzing data and summarizing the relationships that have been identified. The paper concludes that the proposed DM techniques within the domain of SE could be well applied in fields such as Customer Relationship Management (CRM), eCommerce, and eGovernment. ACM Computing Classification System (1998): H.2.8.
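As one hedged example of a DM technique applied to SE data (not the specific tool discussed in the paper), the sketch below trains a decision tree on synthetic per-module metrics to flag modules likely to contain bugs:

```python
# Illustrative sketch with synthetic data: a decision tree, a common DM
# technique, applied to per-module SE metrics for bug-proneness prediction.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Columns: lines of code, cyclomatic complexity, number of past changes
X = np.array([[120,   4,  2],
              [980,  35, 17],
              [250,   9,  3],
              [1500, 48, 22],
              [90,    3,  1],
              [700,  28, 12]])
y = np.array([0, 1, 0, 1, 0, 1])   # 1 = a bug was reported against the module

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(clf.predict([[800, 30, 10]]))   # predicts whether this new module is bug-prone
```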
Abstract:
The objective of this study was to develop a model to predict the transport and fate of gasoline components of environmental concern in the Miami River by mathematically simulating the movement of dissolved benzene, toluene, xylene (BTX), and methyl-tertiary-butyl ether (MTBE) arising from minor gasoline spills in the inter-tidal zone of the river. The computer codes were based on mathematical algorithms that account for advective and dispersive transport along the river and for the prevailing phase transformations of BTX and MTBE, namely volatilization and settling. The model used a finite-difference scheme under steady-state conditions, with a set of numerical equations solved by two numerical methods: Gauss-Seidel and Jacobi iterations. A numerical validation was conducted by comparing the results from both methods with analytical and numerical reference solutions. Since similar trends were obtained, it was concluded that the computer codes were algorithmically correct. The Gauss-Seidel iteration converged faster than the Jacobi iteration and was therefore selected for further development of the computer program and software. A sensitivity analysis showed that the model is very sensitive to wind speed but not to sediment settling velocity. Software was then developed with the model code embedded, providing two user-friendly visual forms: one to interface with the database files and the other to execute the model and present graphical and tabulated results. For all predicted concentrations of BTX and MTBE, the maximum concentrations were more than an order of magnitude lower than current drinking-water standards. It should be pointed out, however, that concentrations below these standards, although not harmful to humans, may still be very harmful to organisms at the trophic levels of the Miami River ecosystem and associated waters. This computer model can be used for the rapid assessment and management of the effects of minor gasoline spills on inter-tidal riverine water quality.
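The numerical core described above can be illustrated with a hedged sketch: a one-dimensional steady-state advection-dispersion equation with first-order loss (standing in for volatilization/settling), discretized with central finite differences and solved with both Jacobi and Gauss-Seidel iterations so that their convergence can be compared. All parameter values are invented for illustration, not the Miami River calibration:

```python
# Sketch: D*C'' - u*C' - k*C = 0 on 0..L with C(0)=C0, C(L)=0, solved iteratively.
import numpy as np

def solve(method, n=51, L=1000.0, D=30.0, u=0.3, k=1e-3, C0=1.0, tol=1e-8):
    dx = L / (n - 1)
    lower = D / dx**2 + u / (2 * dx)   # coefficient multiplying C[i-1]
    upper = D / dx**2 - u / (2 * dx)   # coefficient multiplying C[i+1]
    diag = 2 * D / dx**2 + k           # coefficient multiplying C[i]
    C = np.zeros(n)
    C[0] = C0                          # dissolved concentration at the spill boundary
    for it in range(1, 100_000):
        C_old = C.copy()
        src = C_old if method == "jacobi" else C   # Gauss-Seidel reuses fresh values
        for i in range(1, n - 1):
            C[i] = (lower * src[i - 1] + upper * src[i + 1]) / diag
        if np.max(np.abs(C - C_old)) < tol:
            return C, it
    return C, it

for method in ("jacobi", "gauss-seidel"):
    C, iters = solve(method)
    print(f"{method:13s}: {iters:5d} iterations, C(x=L/2) = {C[len(C) // 2]:.4f}")
```

Because the resulting tridiagonal system is diagonally dominant at this grid Péclet number, both iterations converge, with Gauss-Seidel typically needing roughly half as many sweeps as Jacobi.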
Abstract:
Software development is an extremely complex process, during which human errors are introduced and result in faulty software systems. It is highly desirable and important that these errors be prevented and detected as early as possible. A software architecture design is a high-level system description that embodies many system features and properties which are eventually implemented in the final operational system. Therefore, methods for modeling and analyzing software architecture descriptions can help prevent and reveal human errors and thus improve software quality. Furthermore, if an analyzed software architecture description can be used to derive a partial software implementation, especially when the derivation can be automated, significant benefits can be gained in both system quality and productivity. This dissertation proposes a framework for an integrated analysis of both the design and the implementation. To ensure the desirable properties of the architecture model, we apply formal verification using model checking. To ensure the desirable properties of the implementation, we develop a methodology and an associated tool to translate an architecture specification into an implementation written in a combination of the ArchJava, Java, and AspectJ programming languages. The translation is semi-automatic, so many manual programming errors can be prevented. Furthermore, the translation inserts monitoring code into the implementation so that runtime verification can be performed, which provides additional assurance of the quality of the implementation. Moreover, validations of the translation from architecture model to program are provided. Finally, several case studies are conducted and presented.
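The dissertation's translation targets ArchJava, Java, and AspectJ; the Python sketch below only illustrates the underlying idea of woven-in monitoring code, using a decorator that checks a hypothetical connector specification every time an operation runs:

```python
# Sketch only: a decorator enforces an architectural constraint at run time
# ("only components listed as connected may call each other").
import functools

ALLOWED_CONNECTIONS = {("client", "service"), ("service", "database")}  # from the architecture model

def monitored(component, caller_kw="caller"):
    """Wraps a component operation with a run-time architecture check."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            caller = kwargs.pop(caller_kw, None)
            if caller is not None and (caller, component) not in ALLOWED_CONNECTIONS:
                raise RuntimeError(f"architecture violation: {caller} -> {component}")
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@monitored("database")
def query(sql):
    return f"rows for: {sql}"

print(query("SELECT 1", caller="service"))    # allowed by the connector specification
try:
    query("SELECT 1", caller="client")        # violates the connector specification
except RuntimeError as err:
    print(err)
```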
Abstract:
As users continually request additional functionality, software systems will continue to grow in complexity, as well as in their susceptibility to failures. Particularly for sensitive systems requiring higher levels of reliability, faulty system modules increase development and maintenance cost. Hence, identifying them early would support the development of reliable systems through improved scheduling and quality control. As a consequence, research effort to predict software modules likely to contain faults has been substantial. Although a wide range of fault prediction models have been proposed, we remain far from having reliable tools that can be widely applied to real industrial systems. For projects with known fault histories, numerous studies show that statistical models can provide reasonable estimates when predicting faulty modules from software metrics. However, because context-specific metrics differ from project to project, predicting across projects is difficult. Prediction models obtained from one project are ineffective at predicting fault-prone modules when applied to other projects. Hence, the ability to take full advantage of existing work in the software development community has been substantially limited. As a step towards solving this problem, in this dissertation we propose a fault prediction approach that exploits existing prediction models, adapting them to improve their ability to predict faulty system modules across different software projects.
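A hedged sketch of the cross-project problem and of one simple form of adaptation (per-project rescaling of metrics); the data are synthetic and this is not the dissertation's actual approach:

```python
# Sketch: a model trained on one project's metrics is reused on another project
# whose metrics live on a different scale, with and without rescaling.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Project A: smaller modules; faults correlate with complexity (column 1)
X_a = rng.normal(loc=[200, 10], scale=[80, 4], size=(60, 2))
y_a = (X_a[:, 1] + rng.normal(0, 2, 60) > 11).astype(int)

# Project B: a different codebase with much larger modules (different metric scale)
X_b = rng.normal(loc=[2000, 40], scale=[700, 15], size=(40, 2))
y_b = (X_b[:, 1] + rng.normal(0, 6, 40) > 44).astype(int)

model = LogisticRegression().fit(StandardScaler().fit_transform(X_a), y_a)

raw_acc = model.score(X_b, y_b)                                       # naive reuse
adapted_acc = model.score(StandardScaler().fit_transform(X_b), y_b)   # per-project rescaling
print(f"reused as-is: {raw_acc:.2f}   rescaled per project: {adapted_acc:.2f}")
```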
Abstract:
Requirements specification has long been recognized as a critical activity in software development processes because of its impact on project risks when poorly performed. A large number of studies address theoretical aspects, proposed techniques, and recommended practices for Requirements Engineering (RE). To be successful, RE has to ensure that the specified requirements are complete and correct, meaning that all the intents of the stakeholders in a given business context are covered by the requirements and that no unnecessary requirement has been introduced. However, accurately capturing the business intents of the stakeholders remains a challenge, and it is a major factor in software project failures. This master's dissertation presents a novel method, referred to as "Problem-Based SRS", aimed at improving the quality of the Software Requirements Specification (SRS) in the sense that the stated requirements provide suitable answers to the customer's real business issues. In this approach, knowledge about the software requirements is constructed from knowledge about the customer's problems. Problem-Based SRS organizes activities and outcome artifacts into a process with five main steps. It aims to support the software requirements engineering team in systematically analyzing the business context and specifying the software requirements, while also taking into account a first glance and vision of the software. The quality of the specifications is evaluated using traceability techniques and axiomatic design principles. The case studies conducted and presented in this document indicate that the proposed method can contribute significantly to improving software requirements specification.
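As a loose illustration of the traceability checks mentioned above (hypothetical data, not the method's actual artifacts), the sketch below verifies that every requirement traces back to a customer problem and that every problem is covered by at least one requirement:

```python
# Sketch: completeness (every problem covered) and correctness (no untraced,
# i.e. potentially unnecessary, requirement) checks over a traceability map.
problems = {"P1": "invoices are paid late", "P2": "stock levels are unknown"}
requirements = {
    "R1": {"text": "send payment reminders", "traces_to": {"P1"}},
    "R2": {"text": "real-time stock dashboard", "traces_to": {"P2"}},
    "R3": {"text": "dark-mode user interface", "traces_to": set()},   # untraced
}

uncovered = set(problems) - {p for r in requirements.values() for p in r["traces_to"]}
untraced = [rid for rid, r in requirements.items() if not r["traces_to"]]

print("problems with no requirement:", uncovered or "none")
print("requirements with no problem:", untraced or "none")
```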