954 results for Computer software - Quality control


Relevance:

100.00%

Publisher:

Abstract:

Underbody plows can be very useful tools in winter maintenance, especially when compacted snow or hard ice must be removed from the roadway. With significant down-force and an appropriate cutting edge angle, such plows remove compacted snow and ice very effectively, with much greater efficiency than any other tool under those circumstances. However, successful operation of an underbody plow requires considerable skill. If too little down pressure is applied, the plow will not cut the ice or compacted snow. If too much force is applied, the cutting edge may gouge the road surface, often damaging both the road surface and the plow, or the plow may ride up on the cutting edge so that the operator can no longer control it; in such situations the truck can easily spin out. Excessive down force also wears the cutting edge rapidly. Given this need for a high level of operator skill, the operation of an underbody plow is a candidate for automation. Successful automation requires a control system that follows a set of rules representing appropriate operation of such a plow. These rules were developed from earlier work in which operational underbody plows were instrumented to determine the loading upon them (both vertical and horizontal) and the angle at which the blade was operating. The rules have been successfully coded into two computer programs, both using MATLAB® software. In the first program, various load and angle inputs are analyzed to determine when, whether, and how they violate the rules of operation; this program is essentially deterministic in nature. In the second program, the Simulink® package in MATLAB® was used to implement the rules using fuzzy logic.
Fuzzy logic essentially replaces a fixed, constant rule with one that varies so as to improve operational control. The fuzzy logic in this simulation was developed simply by using appropriate routines in the software, rather than directly. The results of the computer testing and simulation indicate that a fully automated, computer-controlled underbody plow is indeed possible. The issue of whether the next steps toward full automation should be taken (and by whom) has also been considered, and some form of joint venture between a Department of Transportation and a vendor has been suggested.
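The rule-based and fuzzy approaches described above can be illustrated with a small sketch. The membership functions, load thresholds and output scaling below are hypothetical, chosen only to show the mechanism; the study's actual MATLAB®/Simulink® rules are not reproduced here.

```python
# Illustrative fuzzy-style down-force adjustment for an underbody plow.
# All thresholds and membership shapes are hypothetical, not from the study.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def downforce_adjustment(vertical_load_kn):
    """Fuzzy rules: too little force -> increase, too much -> decrease."""
    low = tri(vertical_load_kn, -10.0, 0.0, 20.0)    # "force too low"
    ok = tri(vertical_load_kn, 10.0, 25.0, 40.0)     # "force appropriate"
    high = tri(vertical_load_kn, 30.0, 50.0, 70.0)   # "force too high"
    # Weighted-average defuzzification: +1 raise, 0 hold, -1 lower.
    num = low * 1.0 + ok * 0.0 + high * -1.0
    den = low + ok + high
    return num / den if den else 0.0

print(downforce_adjustment(5.0))   # positive: increase down pressure
print(downforce_adjustment(25.0))  # near zero: hold
print(downforce_adjustment(60.0))  # negative: reduce before gouging
```

Unlike a fixed cutoff, the blended output changes gradually as the measured load moves between regions, which is the behaviour the abstract attributes to the fuzzy variant.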

Relevance:

100.00%

Publisher:

Abstract:

The goal of this work is to develop a method to objectively compare the performance of a digital and a screen-film mammography system in terms of image quality. The method takes into account the dynamic range of the image detector, the detection of high and low contrast structures, the visualisation of the images and the observer response. A test object, designed to represent a compressed breast, was constructed from various tissue equivalent materials ranging from purely adipose to purely glandular composition. Different areas within the test object permitted the evaluation of low and high contrast detection, spatial resolution and image noise. All the images (digital and conventional) were captured using a CCD camera to include the visualisation process in the image quality assessment. A mathematical model observer (non-prewhitening matched filter), that calculates the detectability of high and low contrast structures using spatial resolution, noise and contrast, was used to compare the two technologies. Our results show that for a given patient dose, the detection of high and low contrast structures is significantly better for the digital system than for the conventional screen-film system studied. The method of using a test object with a large tissue composition range combined with a camera to compare conventional and digital imaging modalities can be applied to other radiological imaging techniques. In particular it could be used to optimise the process of radiographic reading of soft copy images.
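Under a white-noise assumption, the non-prewhitening matched filter reduces to a simple figure of merit. The sketch below is a simplified stand-in for the paper's model observer; the disc size, contrast and noise level are illustrative, not values from the study.

```python
import numpy as np

def npw_dprime(signal_present, signal_absent, noise_std):
    """Non-prewhitening matched-filter detectability index, assuming
    stationary white noise with standard deviation noise_std.
    d' = |ds|^2 / sqrt(ds^T C ds) reduces to |ds| / sigma for C = sigma^2 I."""
    delta = np.asarray(signal_present, float) - np.asarray(signal_absent, float)
    return np.sqrt((delta ** 2).sum()) / noise_std

# A faint disc on a flat background: higher contrast or lower noise -> larger d'.
bg = np.zeros((32, 32))
disc = bg.copy()
yy, xx = np.mgrid[:32, :32]
disc[(yy - 16) ** 2 + (xx - 16) ** 2 <= 25] += 0.05  # low-contrast structure

print(npw_dprime(disc, bg, noise_std=0.02))
```

In the paper's method the same idea is applied with measured contrast, spatial resolution and noise, so that the two imaging chains can be ranked on equal terms for a given dose.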

Relevance:

100.00%

Publisher:

Abstract:

A specification for contractor moisture quality control (QC) in roadway embankment construction has been in use for approximately 10 years in Iowa, on about 190 projects. The use of this QC specification and the development of the soils certification program for the Iowa Department of Transportation (DOT) originated from Iowa Highway Research Board (IHRB) embankment quality research projects. Since that research, the Iowa DOT has applied compaction with moisture control on most embankment work under pavements. This study set out to independently evaluate the actual quality of compaction under the current specifications. Results show that Proctor tests conducted by Iowa State University (ISU), using representative material obtained from each test section where field testing was conducted, yielded optimum moisture contents and maximum dry densities that differ from those selected by the Iowa DOT for QC/quality assurance (QA) testing. Comparisons between the measured and selected values showed a standard error of 2.9 lb/ft³ for maximum dry density and 2.1% for optimum moisture content. The difference in optimum moisture content was as high as 4%, and the difference in maximum dry density was as high as 6.5 lb/ft³. The differences at most test locations, however, were within the allowable variation suggested in AASHTO T 99 for test results between different laboratories. The ISU testing results showed higher rates of data outside the specified target limits than the available contractor QC data indicated for cohesive materials. Also, wet fill materials were often observed during construction, and several test points indicated that materials were placed and accepted wet of the target moisture contents.
The statistical analysis indicates that the results obtained in this study improved over those from previous embankment quality research projects (TR-401 Phases I through III and TR-492) in terms of the percentage of data falling within the specification limits. Despite this evidence of improvement, QC/QA results are not consistently meeting the target limits/values. Recommendations are provided in this report for Iowa DOT consideration, with three proposed options for improving the current specifications. Option 1 enhances the current specifications in terms of material-dependent control limits, training, sampling, and process control. Option 2 develops alternative specifications that incorporate dynamic cone penetrometer or lightweight deflectometer testing into QC/QA. Option 3 incorporates calibrated intelligent compaction measurements into QC/QA.
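A check of a field test point against Proctor-based targets, of the kind implied by Option 1's material-dependent control limits, might look like the following sketch. The tolerance values are purely illustrative, not the Iowa DOT specification.

```python
def within_limits(field_moisture, field_density, opt_moisture, max_density,
                  moisture_tol=2.0, min_compaction=0.95):
    """Check a field test point against Proctor-based targets.
    moisture_tol (percentage points) and min_compaction (relative density)
    are illustrative values, not the Iowa DOT specification."""
    moisture_ok = abs(field_moisture - opt_moisture) <= moisture_tol
    density_ok = field_density >= min_compaction * max_density
    return moisture_ok and density_ok

# Wet of target by 3 percentage points -> fails the moisture check.
print(within_limits(18.0, 108.0, opt_moisture=15.0, max_density=110.0))
# Within tolerance and well compacted -> passes.
print(within_limits(15.5, 107.0, opt_moisture=15.0, max_density=110.0))
```

Note that the outcome depends directly on which optimum moisture content and maximum dry density are used as the reference, which is exactly why the ISU-versus-DOT discrepancy reported above matters.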

Relevance:

100.00%

Publisher:

Abstract:

We propose a new approach and related indicators for globally distributed software support and development, based on a three-year process improvement project in a globally distributed engineering company. The company develops, delivers and supports a complex software system with tailored hardware components and unique end-customer installations. By applying domain knowledge from operations management on lead time reduction and its multiple benefits to process performance, the workflows of globally distributed software development and multitier support processes were measured and monitored throughout the company. The results show that global end-to-end process visibility and centrally managed reporting at all levels of the organization catalyzed a change process toward significantly better performance. With the new performance indicators, based on lead times and their variation with fixed control procedures, the case company was able to report faster bug-fixing cycle times, improved response times and generally better customer satisfaction in its global operations. In all, lead times to implement new features and to respond to customer issues and requests were reduced by 50%.
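Lead-time indicators of the kind described above can be sketched as follows. The data, the two-sigma warning limit and the indicator set are illustrative, not the case company's actual reporting definitions.

```python
import statistics

def lead_time_indicators(lead_times_days):
    """Summarize a workflow's lead times: mean, spread and a simple
    two-sigma warning limit for unusually slow cases."""
    mean = statistics.mean(lead_times_days)
    sigma = statistics.stdev(lead_times_days)
    return {"mean": mean, "sigma": sigma, "ucl": mean + 2 * sigma}

bug_fix_cycles = [4, 6, 5, 7, 30, 5, 6, 4]  # days; one escalated case
ind = lead_time_indicators(bug_fix_cycles)
flagged = [t for t in bug_fix_cycles if t > ind["ucl"]]
print(flagged)  # the 30-day case exceeds the warning limit
```

Tracking both the mean and the variation, rather than the mean alone, is what lets a fixed control procedure distinguish a genuine process change from ordinary fluctuation.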

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND & AIMS: Trace elements (TE) are involved in the immune and antioxidant defences, which are of particular importance during critical illness, but determining plasma TE levels is costly. The present quality control study aimed at assessing the economic impact of computer-reminded blood sampling versus risk-guided, on-demand monitoring of plasma concentrations of selenium, copper and zinc. METHODS: Retrospective analysis of two cohorts of patients admitted during 6-month periods in 2006 and 2009 to the ICU of a university hospital. Inclusion criteria: receiving intravenous micronutrient supplements and/or having a TE sampling during the ICU stay. TE samplings were triggered by a computerized reminder in 2006 versus guided by nutritionists in 2009. RESULTS: During the two periods, 636 patients out of 2,406 consecutive admissions met the inclusion criteria, representing 29.7% and 24.9% of the respective periods' admissions. The 2009 patients had higher SAPS2 scores (p = 0.02) and lower BMI (p = 0.007) compared to 2006. The number of laboratory determinations was drastically reduced in 2009, particularly during the first week, despite the higher severity of the cohort, resulting in a 55% cost reduction. CONCLUSIONS: Monitoring of TE concentrations guided by a nutritionist reduced the sampling frequency and targeted the sickest, high-risk patients requiring adaptation of their nutritional prescription. This control leads to cost reduction compared with an automated sampling prescription.

Relevance:

100.00%

Publisher:

Abstract:

Continuous quality measurement as part of the software process has become more common among software companies in recent years. The ISO 9001:2000 quality standard requires companies to measure and monitor the quality of their products and processes. Selecting quality metrics is a challenging task: companies often believe they are measuring quality when in reality they are measuring various properties of the software, such as size or complexity. In this master's thesis, a benchmarking-based quality measurement process is developed for the software validation process, for evaluating, measuring and monitoring the quality of software products. The quality metrics are selected according to predefined criteria, and target values are set for them based on the results of the benchmarking analysis. In addition to the quality measurement process, the thesis gives a recommendation for deploying and using the process as part of the company's operations, which enables continuous monitoring and further improvement in the future.
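The core step of the process described above, comparing measured metric values against targets derived from a benchmarking analysis, can be sketched as follows. The metric names and target values are hypothetical.

```python
def evaluate_against_targets(measured, targets):
    """Compare measured quality-metric values against target values
    derived from a benchmarking analysis. Here 'lower is better' is
    assumed for every metric; names and targets are hypothetical."""
    report = {}
    for metric, target in targets.items():
        value = measured.get(metric)
        report[metric] = value is not None and value <= target
    return report

targets = {"defect_density_per_kloc": 0.5, "open_issue_age_days": 14}
measured = {"defect_density_per_kloc": 0.8, "open_issue_age_days": 10}
print(evaluate_against_targets(measured, targets))
```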

Relevance:

100.00%

Publisher:

Abstract:

Objective: To develop procedures to ensure consistency of the printing quality of digital images, by means of quantitative hardcopy analysis based on a standard image. Materials and Methods: The characteristics of mammography DI-ML and general-purpose DI-HL films were studied through the QC-Test, utilizing different processing techniques in a FujiFilm®-DryPix4000 printer. Software was developed for sensitometric evaluation, generating a digital image that includes a gray scale and a bar pattern to evaluate contrast and spatial resolution. Results: Mammography films showed a maximum optical density of 4.11 and general-purpose films, 3.22. The digital image was developed with a 33-step wedge scale and a high-contrast bar pattern (1 to 30 lp/cm) for spatial resolution evaluation. Conclusion: Mammographic films presented higher maximum optical density and contrast resolution than general-purpose films. The digital processing technique used could only change the pixel values of the image matrix and did not affect the printing standard. The proposed standard digital image allows greater control of the relationship between pixel values and optical density in the quality analysis of films and printing systems.
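Generating a digital test image containing a step wedge and a bar pattern, as described above, can be sketched with NumPy. Only the 33-step count and the high-contrast bars follow the text; the image dimensions and band widths below are illustrative.

```python
import numpy as np

def step_wedge(n_steps=33, step_width=10, height=64):
    """Horizontal gray-scale wedge: n_steps bands from 0 (black) to 1 (white)."""
    levels = np.linspace(0.0, 1.0, n_steps)
    return np.repeat(levels, step_width)[None, :].repeat(height, axis=0)

def bar_pattern(line_pairs, pixels_per_lp=4, height=64):
    """High-contrast bars: each line pair is one black plus one white stripe."""
    one_lp = np.array([0.0] * (pixels_per_lp // 2) + [1.0] * (pixels_per_lp // 2))
    return np.tile(one_lp, line_pairs)[None, :].repeat(height, axis=0)

wedge = step_wedge()
bars = bar_pattern(line_pairs=30)
print(wedge.shape, bars.shape)
print(len(np.unique(wedge)))  # 33 distinct gray levels
```

Printing such an image and measuring the optical density of each wedge step is what gives the pixel-value-to-density curve that the proposed quality control relies on.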

Relevance:

100.00%

Publisher:

Abstract:

Joints intended for welding frequently show variations in geometry and position, for which it is unfortunately not possible to apply a single set of operating parameters that ensures constant quality. This difficulty stems from a number of factors, including inaccurate joint preparation and joint fit-up, tack welds, and thermal distortion of the workpiece. In plasma arc keyhole welding of butt joints, deviations in the gap width may cause weld defects such as an incomplete weld bead, excessive penetration and burn-through. Manual adjustment of welding parameters to compensate for variations in the gap width is very difficult, and unsatisfactory weld quality is often obtained. In this study, a control system for plasma arc keyhole welding was developed and used to study the effects of real-time control of welding parameters on gap tolerance during welding of austenitic stainless steel AISI 304L. The welding tests demonstrated the beneficial effect of real-time control on weld quality. Compared with welding using constant parameters, the maximum tolerable gap width with acceptable weld quality was 47% higher when using real-time controlled parameters for a plate thickness of 5 mm. In addition, burn-through occurred only at significantly larger gap widths when parameters were controlled in real time. Increased gap tolerance enables joints to be prepared and fitted up less accurately, saving time and preparation costs in welding. In addition to the control system, a novel technique for back-face monitoring is described in this study. The test results showed that the technique can be successfully applied to penetration monitoring when welding non-magnetic materials. The results also imply that it is possible to measure the dimensions of the plasma efflux or weld root and to use this information in a feedback control system, thus maintaining the required weld quality.
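In its simplest form, real-time adaptation of welding parameters to a measured gap width could follow a linear schedule like the sketch below. The gains and base parameters are illustrative; the study's actual control laws are not given here.

```python
def adapt_parameters(gap_mm, base_current_a=180.0, base_speed_mm_s=4.0):
    """Scale welding current down and travel speed up as the measured gap
    widens, to limit heat input and avoid excessive penetration and
    burn-through. The linear gains are illustrative, not the study's values."""
    gap_mm = max(0.0, gap_mm)
    current = base_current_a - 15.0 * gap_mm   # less heat input over a gap
    speed = base_speed_mm_s + 0.5 * gap_mm     # move faster over a gap
    return current, speed

for gap in (0.0, 0.5, 1.0):
    print(adapt_parameters(gap))
```

A closed-loop version would feed a gap or back-face measurement into such a schedule continuously, which is what allows a wider gap to be tolerated than any single fixed parameter set.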

Relevance:

100.00%

Publisher:

Abstract:

Software systems are expanding and becoming increasingly present in everyday activities. A constantly evolving society demands that they deliver more functionality, be easy to use and work as expected. All these challenges increase the size and complexity of a system. People may not be aware of the presence of a software system until it malfunctions or fails altogether. The ability to depend on software is particularly significant for critical systems, where quality is an essential issue, since any deficiency may lead to considerable financial loss or endanger lives. Traditional development methods may not ensure a sufficiently high level of quality. Formal methods, on the other hand, allow us to achieve a high level of rigour and can be applied to develop a complete system or only a critical part of it. Such techniques, applied from the early design stages of system development, increase the likelihood of obtaining a system that works as required. However, formal methods are often considered difficult to use in traditional development settings. It is therefore important to make them more accessible and to reduce the gap between formal and traditional development methods. This thesis explores the usability of rigorous approaches by giving an insight into formal designs through graphical notation. The understandability of formal modelling is increased by a compact representation of the development and the related design decisions. The central objective of the thesis is to investigate the impact that rigorous approaches have on the quality of developments. This requires establishing techniques for the evaluation of rigorous developments. Since we study various development settings and methods, specific measurement plans and a set of metrics need to be created for each setting.
Our goal is to provide methods for collecting data and recording evidence of the applicability of rigorous approaches. This would support organisations in making decisions about integrating formal methods into their development processes. It is important to control software development, especially in its initial stages, so we focus on the specification and modelling phases and the related artefacts, e.g. models, which have a significant influence on the quality of the final system. Since the application of formal methods may increase the complexity of a system, it may affect its maintainability, and thus its quality. Our goal is to improve the quality of a system via metrics and measurements, as well as generic refinement patterns applied to a model and a specification. We argue that these can facilitate the process of creating software systems, e.g. by controlling complexity and providing modelling guidelines. Moreover, we regard them as additional mechanisms for quality control and improvement, also for rigorous approaches. The main contribution of this thesis is to provide the metrics and measurements that help in assessing the impact of rigorous approaches on developments. We establish techniques for the evaluation of certain aspects of quality, based on structural, syntactical and process-related characteristics of early-stage development artefacts, i.e. specifications and models. The presented approaches are applied to various case studies, and the results of the investigation are juxtaposed with the perceptions of domain experts. It is our aspiration to promote measurement as an indispensable part of the quality control process and as a strategy for quality improvement.

Relevance:

100.00%

Publisher:

Abstract:

The aim of this study was to evaluate the applicability of tabular CUSUM control charts to the quality control of chemical variables in surface water. Bibliographic and field research was carried out, with water samples collected from 2003 to 2009 (30 samples in total, some monthly and others semi-annual) in order to monitor the variables that regulate water quality. It was found that these charts may be applied to control the quality of river water, proving effective in detecting changes in the process, especially for small samples (n = 1), where there is no replication, as in this research. It was also concluded that the Mandurim River does not present significant levels of pollution.
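A tabular CUSUM for individual observations (n = 1) accumulates deviations from a target value mu0 beyond a reference value k, and signals when either cumulative sum exceeds a decision interval h. A minimal sketch, with hypothetical readings and chart parameters:

```python
def tabular_cusum(xs, mu0, k, h):
    """Tabular CUSUM for individual observations (n = 1).
    C+ accumulates upward shifts, C- downward shifts; an index is
    flagged when either statistic exceeds the decision interval h."""
    c_plus = c_minus = 0.0
    signals = []
    for i, x in enumerate(xs):
        c_plus = max(0.0, x - (mu0 + k) + c_plus)
        c_minus = max(0.0, (mu0 - k) - x + c_minus)
        if c_plus > h or c_minus > h:
            signals.append(i)
    return signals

# Hypothetical monthly readings of one chemical variable, drifting upward.
readings = [5.0, 5.1, 4.9, 5.0, 5.2, 5.8, 6.1, 6.3]
print(tabular_cusum(readings, mu0=5.0, k=0.25, h=1.0))
```

Because the statistic accumulates small deviations over time, a sustained drift triggers a signal even though no single reading would look alarming on its own, which is why CUSUM charts suit infrequent, unreplicated samples.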

Relevance:

100.00%

Publisher:

Abstract:

Objective: to analyze the implementation of a trauma registry in a university teaching hospital delivering care under the Unified Health System (SUS), and its ability to identify points for improvement in the quality of care provided. Methods: the data collection group comprised students from the medicine and nursing courses, some holding FAPESP scholarships (technical training 1), overseen by the project coordinators. The itreg software (ECO Sistemas-RJ/SBAIT) was used as the database tool. Several quality "filters" were proposed to select cases for review in the quality control process. Results: data for 1,344 trauma patients were entered into the itreg database between March and November 2014. Around 87.0% of the cases were blunt trauma patients, 59.6% had RTS > 7.0 and 67% had ISS < 9. Full records were available for 292 cases, which were selected for review in the quality program. The audit filters most frequently registered were laparotomy more than four hours after admission and drainage of acute subdural hematomas more than four hours after admission. Several points for improvement were flagged, such as control of patient overtriage, the need to reduce the number of negative imaging exams, the development of protocols for achieving central venous access, and the management of major TBI. Conclusion: the trauma registry provides a clear picture of the points to be improved in trauma patient care; however, there are specific peculiarities to implementing this tool in the Brazilian milieu.

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this study was to explore the software development methods and quality assurance practices used by the South Korean software industry. Empirical data were collected through a survey focused on three main areas: software life cycle models and methods; software quality assurance, including quality standards; and the strengths and weaknesses of the South Korean software industry. The results of the completed survey showed that the use of agile methods slightly surpasses the use of traditional software development methods. The survey also revealed the interesting result that almost half of the South Korean companies do not use any software quality assurance plan in their projects. Regarding the state of the South Korean software industry, a large number of respondents thought that, despite its weaknesses, the status of software development in South Korea will improve in the future.

Relevance:

100.00%

Publisher:

Abstract:

The Seed Vigor Imaging System (SVIS®) software has been successfully used to evaluate seed physiological potential through automated analysis of scanned seedlings. In this research, the efficiency of this system was compared with that of other accepted tests for assessing the vigor of cucumber (Cucumis sativus L.) seeds from distinct lots of the Supremo and Safira cultivars. Seeds were subjected to germination, traditional and saturated-salt accelerated aging, seedling emergence, seedling length and SVIS analyses (determination of vigor indices, seedling growth uniformity, and the lengths of the primary root, hypocotyl and whole seedlings). It was also determined whether the definition of the seedling growth/uniformity ratio affects the sensitivity of the SVIS®. Results showed that SVIS analyses consistently identified seed lot performance and produced information comparable to that from recommended seed vigor tests, thus demonstrating suitable sensitivity for a rapid and objective evaluation of the physiological potential of cucumber seeds. Analyses of four-day-old cucumber seedlings using the SVIS® are more accurate, and the growth/uniformity ratio does not affect the precision of the results.
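A vigor index of the kind computed from imaged seedlings combines growth and uniformity scores into a single value. The sketch below uses a weighted average with an illustrative 70/30 weighting; this is an assumption for demonstration, not necessarily the SVIS® default.

```python
def vigor_index(growth_score, uniformity_score, growth_weight=0.7):
    """Combine seedling growth and uniformity scores (each 0-100) into a
    single vigor index as a weighted average. The 70/30 weighting is an
    illustrative choice, not necessarily the SVIS default."""
    if not 0.0 <= growth_weight <= 1.0:
        raise ValueError("growth_weight must be between 0 and 1")
    return growth_weight * growth_score + (1.0 - growth_weight) * uniformity_score

# Hypothetical (growth, uniformity) scores for two seed lots.
lots = {"lot_A": (85.0, 90.0), "lot_B": (60.0, 75.0)}
ranking = sorted(lots, key=lambda k: vigor_index(*lots[k]), reverse=True)
print(ranking)  # lot_A ranks above lot_B
```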

Relevance:

100.00%

Publisher:

Abstract:

The vast majority of our contemporary society owns a mobile phone, which has resulted in a dramatic rise in the number of networked computers in recent years. Security issues in these computers have followed the same trend, and nearly everyone is now affected by them. How could the situation be improved? For software engineers, an obvious answer is to build software with security in mind. The problem is how to define secure software and how to measure security. This thesis divides the problem into three research questions. First, how can we measure the security of software? Second, what types of tools are available for measuring security? And finally, what do these tools reveal about the security of software? Measuring tools of this kind are commonly called metrics. The thesis takes the perspective of software engineers in the software design phase, which means that code-level semantics and programming language specifics are not discussed; organizational policy, management issues and the software development process are also out of scope. The first two research questions were studied using a literature review, while the third was studied through a case study. The target of the case study was a Java-based email server called Apache James, whose changelog, security issue details and source code were accessible. The research revealed a consensus in the terminology of software security: security verification activities are commonly divided into evaluation and assurance. The focus of this work is on assurance, which means verifying one's own work. There are 34 metrics available for security measurement, of which five are evaluation metrics and 29 are assurance metrics. We found, however, that the general quality of these metrics was not good.
Only three metrics in the design category passed the inspection criteria and could be used in the case study. The metrics claim to give quantitative information on the security of software, but in practice they were limited to comparing different versions of the same software. Apart from being merely relative, the metrics were unable to detect security issues or point out problems in the design, and interpreting their results was difficult. In conclusion, the general state of software security metrics leaves a lot to be desired: the metrics studied had both theoretical and practical issues and are not suitable for daily engineering workflows. They nevertheless provide a basis for further research, since they point out areas where security metrics must improve if verification of security from the design phase is desired.

Relevance:

100.00%

Publisher:

Abstract:

Software systems have become increasingly widespread and important in our society, so there is a constant need for high-quality software. One of the most widely used techniques for improving software quality is refactoring, which improves the structure of a program while preserving its external behavior. When applied properly, refactoring promises to improve the understandability, maintainability and extensibility of software while increasing programmer productivity. In general, refactoring can be applied at the specification, design or code level. This thesis addresses the automation of the refactoring recommendation process at the code level, in two main steps: 1) detecting the code fragments that should be improved (e.g., design defects), and 2) identifying the refactoring solutions to apply. For the first step, we exploit regularities that can be found in examples of design defects: a genetic algorithm is used to automatically generate detection rules from defect examples. For the second step, we introduce an approach based on heuristic search. The process consists of finding the optimal sequence of refactoring operations that improves software quality by minimizing the number of defects while prioritizing the most critical instances. We also explore other objectives to optimize: the number of changes required to apply the refactoring solution, semantic preservation, and consistency with the change history. Reducing the number of changes keeps the result as close as possible to the initial design, while semantic preservation ensures that the restructured program remains semantically coherent.
Furthermore, we use the change history to suggest new refactorings in similar contexts. In addition, we introduce a multi-objective approach that improves software quality attributes (flexibility, maintainability, etc.) and fixes "bad" design practices (design defects) while introducing "good" design practices (design patterns).
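The search in the second step can be caricatured by a greedy sketch: repeatedly apply the refactoring operation that removes the most remaining defects. The thesis itself uses genetic algorithms and multi-objective heuristic search; the operation names and defect counts here are hypothetical.

```python
def heuristic_refactoring_sequence(defect_count, operations, max_steps=10):
    """Greedy stand-in for a heuristic refactoring search: at each step,
    apply the operation expected to remove the most remaining defects.
    `operations` maps an operation name to its estimated defect reduction."""
    sequence = []
    for _ in range(max_steps):
        best = max(operations, key=lambda op: min(operations[op], defect_count))
        gain = min(operations[best], defect_count)
        if gain == 0:
            break
        sequence.append(best)
        defect_count -= gain
        operations = {op: n for op, n in operations.items() if op != best}
        if not operations:
            break
    return sequence, defect_count

# Hypothetical operations and their estimated defect reductions.
ops = {"extract_class": 3, "move_method": 2, "inline_temp": 1}
seq, remaining = heuristic_refactoring_sequence(6, ops)
print(seq, remaining)
```

A real multi-objective search would score each candidate sequence not only on defects removed but also on change count, semantic preservation and consistency with the change history, trading these objectives off rather than greedily maximizing one.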