906 results for software quality


Relevance: 30.00%

Abstract:

OBJECTIVE: To describe the electronic medical databases used in antiretroviral therapy (ART) programmes in lower-income countries and assess the measures such programmes employ to maintain and improve data quality and reduce the loss of patients to follow-up. METHODS: In 15 countries of Africa, South America and Asia, a survey was conducted from December 2006 to February 2007 on the use of electronic medical record systems in ART programmes. Patients enrolled in the sites at the time of the survey but not seen during the previous 12 months were considered lost to follow-up. The quality of the data was assessed by computing the percentage of missing key variables (age, sex, clinical stage of HIV infection, CD4+ lymphocyte count and year of ART initiation). Associations between site characteristics (such as number of staff members dedicated to data management), measures to reduce loss to follow-up (such as the presence of staff dedicated to tracing patients) and data quality and loss to follow-up were analysed using multivariate logit models. FINDINGS: Twenty-one sites that together provided ART to 50 060 patients were included (median number of patients per site: 1000; interquartile range, IQR: 72-19 320). Eighteen sites (86%) used an electronic database for medical record-keeping; 15 (83%) such sites relied on software intended for personal or small business use. The median percentage of missing data for key variables per site was 10.9% (IQR: 2.0-18.9%) and declined with training in data management (odds ratio, OR: 0.58; 95% confidence interval, CI: 0.37-0.90) and weekly hours spent by a clerk on the database per 100 patients on ART (OR: 0.95; 95% CI: 0.90-0.99). About 10 weekly hours per 100 patients on ART were required to reduce missing data for key variables to below 10%. The median percentage of patients lost to follow-up 1 year after starting ART was 8.5% (IQR: 4.2-19.7%). Strategies to reduce loss to follow-up included outreach teams, community-based organizations and checking death registry data. Implementation of all three strategies substantially reduced losses to follow-up (OR: 0.17; 95% CI: 0.15-0.20). CONCLUSION: The quality of the data collected and the retention of patients in ART treatment programmes are unsatisfactory for many sites involved in the scale-up of ART in resource-limited settings, mainly because of insufficient staff trained to manage data and trace patients lost to follow-up.
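A minimal sketch of the missing-data measure used above, assuming patient records in a pandas DataFrame; the column names are hypothetical rather than the programmes' actual schemas:

```python
# Per-site percentage of missing key variables, mirroring the quality
# measure described in the abstract. Column names are placeholders.
import pandas as pd

KEY_VARS = ["age", "sex", "clinical_stage", "cd4_count", "art_start_year"]

def missing_key_percentage(records: pd.DataFrame) -> float:
    """Percentage of missing entries across the key variables."""
    cells = records[KEY_VARS]
    return 100.0 * cells.isna().sum().sum() / cells.size

# Example: a toy site with three patients and some gaps.
site = pd.DataFrame({
    "age": [34, None, 41],
    "sex": ["F", "M", None],
    "clinical_stage": [3, 2, 3],
    "cd4_count": [180, None, 250],
    "art_start_year": [2005, 2006, 2006],
})
print(f"missing key data: {missing_key_percentage(site):.1f}%")  # 20.0%
```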

Relevance: 30.00%

Abstract:

In-cylinder pressure transducers have been used for decades to record combustion pressure inside a running engine. However, due to the extreme operating environment, transducer design and installation must be considered in order to minimize measurement error. One such error is caused by thermal shock, where the pressure transducer experiences a high heat flux that can distort the transducer diaphragm and also change the crystal sensitivity. This research focused on investigating the effects of thermal shock on in-cylinder pressure transducer data quality using a 2.0L, four-cylinder, spark-ignited, direct-injected, turbocharged GM engine. Cylinder four was modified with five ports to accommodate pressure transducers from different manufacturers: an AVL GH14D, an AVL GH15D, a Kistler 6125C, and a Kistler 6054AR. The GH14D, GH15D, and 6054AR were M5-size transducers; the 6125C was a larger, 6.2 mm transducer. Both AVL pressure transducers utilized a PH03 flame arrestor. Sweeps of ignition timing (spark sweep), engine speed, and engine load were performed to study the effects of thermal shock on each pressure transducer. The project consisted of two distinct phases: experimental engine testing and simulation using a commercially available software package. A comparison was performed to characterize data quality by setting the measured cylinder pressure against the simulated results. This comparison was valuable because the simulation results did not include thermal shock effects. All three sets of tests showed that peak cylinder pressure was essentially unaffected by thermal shock, and the experimental data correlated very well with the simulated results. The spark sweep was performed at 1300 RPM and 3.3 bar NMEP and showed that the differences between the simulated results (no thermal shock) and the experimental data for the indicated mean effective pressure (IMEP) and the pumping mean effective pressure (PMEP) were significantly smaller than the published accuracies: all transducers had an IMEP percent difference below 0.038% and a PMEP percent difference below 0.32%, whereas Kistler and AVL publish that the accuracy of their pressure transducers is within plus or minus 1% for the IMEP (AVL 2011; Kistler 2011). In addition, the difference in average exhaust absolute pressure between the simulated results and the experimental data was greatest for the two Kistler pressure transducers; the mounting location and the lack of a flame arrestor are believed to be the cause of the increased error. For the engine speed sweep, the torque output was held constant at 203 Nm (150 ft-lbf) from 1500 to 4000 RPM. The difference in IMEP was less than 0.01% and the PMEP difference was less than 1%, except for the AVL GH14D at 5% and the AVL GH15D at 2.25%. A noticeable error in PMEP appeared as the load increased during the engine speed sweeps, as expected. The load sweep was conducted at 2000 RPM over a range of NMEP from 1.1 to 14 bar. The differences in IMEP were less than 0.08%, while the PMEP differences were below 1%, except for the AVL GH14D at 1.8% and the AVL GH15D at 1.25%. In-cylinder pressure transducer data quality was effectively analyzed using a combination of experimental data and simulation results. Several criteria can be used to investigate the impact of thermal shock on data quality and to determine the best location and thermal protection for various transducers.
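For reference, the IMEP compared above is the cyclic work integral divided by the displaced volume; a minimal numerical sketch of that calculation and the percent-difference comparison, using synthetic arrays rather than data from the engine tests:

```python
import numpy as np

def imep(pressure_pa: np.ndarray, volume_m3: np.ndarray) -> float:
    """IMEP in Pa from a p-V loop sampled once around the cycle."""
    # Cyclic integral of p dV via the trapezoidal rule; the arrays are
    # assumed to trace the closed loop (first point repeated at the end).
    work_j = np.sum(0.5 * (pressure_pa[1:] + pressure_pa[:-1])
                    * np.diff(volume_m3))
    displacement_m3 = volume_m3.max() - volume_m3.min()
    return work_j / displacement_m3

def percent_difference(measured: float, reference: float) -> float:
    """Percent difference of a transducer value against the simulation."""
    return 100.0 * abs(measured - reference) / abs(reference)
```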

Relevance: 30.00%

Abstract:

In the realm of computer programming, the experience of writing a program is used to reinforce concepts and evaluate ability. This research uses three case studies to evaluate the introduction of testing through Kolb's Experiential Learning Model (ELM). We then analyze the impact of those testing experiences to determine methods for improving future courses. The first testing experience that students encounter is unit test reports in their early courses. This course demonstrates that automating and improving feedback can provide more ELM iterations. The JUnit Generation (JUG) tool also provided a positive experience for the instructor by reducing the overall workload. Later, undergraduate and graduate students have the opportunity to work together in a multi-role Human-Computer Interaction (HCI) course. The interactions use usability analysis techniques, with graduate students as usability experts and undergraduate students as design engineers. Students get experience testing the user experience of their product prototypes using methods ranging from heuristic analysis to user testing. From this course, we learned the importance of the instructor's role in the ELM. As more roles were added to the HCI course, a desire arose to provide more complete, quality-assured software. This inspired the addition of unit testing experiences to the course. However, we learned that significant preparations must be made to apply the ELM when students are resistant. The research presented through these courses was driven by the recognition of a need for testing in a Computer Science curriculum. Our understanding of the ELM suggests the need for student experience when being introduced to testing concepts. We learned that experiential learning, when appropriately implemented, can provide benefits to the Computer Science classroom. When examined together, these course-based research projects provided insight into building strong testing practices into a curriculum.

Relevance: 30.00%

Abstract:

INTRODUCTION: A multi-centre study was conducted during 2005 by means of a questionnaire posted on the Italian Society of Emergency Medicine (SIMEU) web page. Our intention was to carry out an organisational and functional analysis of Italian Emergency Departments (ED) in order to pick out some macro-indicators of the activities performed. Participation was good: 69 ED (3,285,440 admissions to emergency services) responded to the questionnaire. METHODS: The study was based on 18 questions: 3 regarding the personnel of the ED, 2 regarding organisational and functional aspects, 5 on the activity of the ED, 7 on triage and 1 on the assessment of the quality perceived by the users of the ED. RESULTS AND CONCLUSION: The replies revealed that 91.30% of the ED were equipped with data-processing software, which, in 96.83% of cases, tracked the entire itinerary of the patient. About 48,000 patients/year used each ED: 76.72% were discharged and 18.31% were hospitalised. Observation Units were active in 81.16% of the ED examined. Triage programmes were in place in 92.75% of ED: in 75.81% of these, triage was performed throughout the entire itinerary of the patient; in 16.13% it was performed only on a symptom basis, and in 8.06% only on call. Of the patients arriving at the ED, 24.19% were assigned a non-urgent triage code, 60.01% an urgent code, 14.30% an emergent code and 1.49% a life-threatening code. Waiting times were 52.39 min for non-urgent patients, 40.26 min for urgent, 12.08 min for emergent and 1.19 min for life-threatening patients.

Relevance: 30.00%

Abstract:

When project managers determine schedules for resource-constrained projects, they commonly use commercial project management software packages. Which resource-allocation methods are implemented in these packages is proprietary information. The resource-allocation problem is in general computationally difficult to solve to optimality. Hence, the question arises if and how various project management software packages differ in quality with respect to their resource-allocation capabilities. None of the few existing papers on this subject uses a sizeable data set and recent versions of common software packages. We experimentally analyze the resource-allocation capabilities of Acos Plus.1, AdeptTracker Professional, CS Project Professional, Microsoft Office Project 2007, Primavera P6, Sciforma PS8, and Turbo Project Professional. Our analysis is based on 1560 instances of the precedence- and resource-constrained project scheduling problem (RCPSP). The experiment shows that using the resource-allocation feature of these packages may lead to a project duration increase of almost 115% above the best known feasible schedule. The increase gets larger with increasing resource scarcity and with an increasing number of activities. We investigate the impact of different complexity scenarios and priority rules on the project duration obtained by the software packages. We provide a decision table to support managers in selecting a software package and a priority rule.
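To make the allocation problem concrete, here is a sketch of a serial schedule-generation scheme driven by a priority rule for a single-resource RCPSP; the packages' actual methods are proprietary, so the instance encoding and rule below are illustrative only:

```python
from typing import Dict, List

def serial_sgs(durations: Dict[str, int],
               preds: Dict[str, List[str]],
               demand: Dict[str, int],
               capacity: int,
               horizon: int,
               priority: Dict[str, int]) -> Dict[str, int]:
    """Heuristic start times for a single-resource RCPSP instance."""
    usage = [0] * horizon                  # resource units in use per period
    start: Dict[str, int] = {}
    while len(start) < len(durations):
        # Eligible: unscheduled activities whose predecessors are all scheduled.
        eligible = [a for a in durations
                    if a not in start and all(p in start for p in preds[a])]
        a = min(eligible, key=lambda x: priority[x])   # apply the priority rule
        earliest = max((start[p] + durations[p] for p in preds[a]), default=0)
        for t in range(earliest, horizon - durations[a] + 1):
            if all(usage[t + k] + demand[a] <= capacity
                   for k in range(durations[a])):       # resource-feasible slot
                start[a] = t
                for k in range(durations[a]):
                    usage[t + k] += demand[a]
                break
    return start

# Toy instance: three activities, capacity 3, activity-number priority rule.
starts = serial_sgs(
    durations={"A": 2, "B": 3, "C": 2},
    preds={"A": [], "B": ["A"], "C": ["A"]},
    demand={"A": 2, "B": 2, "C": 2},
    capacity=3, horizon=20,
    priority={"A": 0, "B": 1, "C": 2},
)
print(starts)  # {'A': 0, 'B': 2, 'C': 5}
```

Different priority rules (e.g., shortest duration first, most successors first) plug into the same scheme and can yield very different project durations, which is exactly the effect the experiment above measures.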

Relevance: 30.00%

Abstract:

Enterprise Applications are complex software systems that manipulate large amounts of persistent data and interact with the user through a vast and complex user interface. In particular, applications written for the Java 2 Platform, Enterprise Edition (J2EE) are composed using various technologies such as Enterprise Java Beans (EJB) or Java Server Pages (JSP), which in turn rely on languages other than Java, such as XML or SQL. In this heterogeneous context, applying existing reverse engineering and quality assurance techniques developed for object-oriented systems is not enough. Because those techniques were created to measure quality or provide information about one aspect of J2EE applications, they cannot properly measure the quality of the entire system. We intend to devise techniques and metrics to measure quality in J2EE applications considering all their aspects, and to aid their evolution. Using software visualization, we also intend to inspect the structure of J2EE applications and all other aspects that can be investigated through this technique. To do so, we also need to create a unified meta-model including all elements composing a J2EE application.
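To make the idea of a unified meta-model concrete, here is a minimal sketch of what such an element hierarchy might look like, spanning beans, pages and SQL; the class and field names are our assumption, not the meta-model the authors propose:

```python
# One element hierarchy covering heterogeneous J2EE artifacts, so that
# metrics and visualizations can traverse Java, JSP and SQL uniformly.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Element:
    name: str

@dataclass
class SqlStatement(Element):
    tables: List[str] = field(default_factory=list)   # tables it touches

@dataclass
class EnterpriseBean(Element):
    queries: List[SqlStatement] = field(default_factory=list)

@dataclass
class ServerPage(Element):                            # a JSP view
    calls: List[EnterpriseBean] = field(default_factory=list)
```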

Relevance: 30.00%

Abstract:

The article introduces the E-learning Circle, a tool developed to assure the quality of the software design process of e-learning systems, considering pedagogical principles as well as technology. The E-learning Circle consists of a number of concentric circles which are divided into three sectors. The content of the inner circles is based on pedagogical principles, while the outer circle specifies how the pedagogical principles may be implemented with technology. The circle's centre is dedicated to the subject taught, ensuring focus on the specific subject's properties. The three sectors represent the student, the teacher and the learning objectives. The strengths of the E-learning Circle are its compact presentation combined with the overview it provides, as well as its usefulness as a design tool: it deals with complexity, provides a common language and embeds best practice. The E-learning Circle is not a prescriptive method, but is useful in several design models and processes. The article presents two projects where the E-learning Circle was used as a design tool.

Relevance: 30.00%

Abstract:

This paper is a summary of the main contributions of the PhD thesis published in [1]. The main research contributions of the thesis are driven by the research question of how to design simple, yet efficient and robust, run-time adaptive resource allocation schemes within the communication stack of Wireless Sensor Network (WSN) nodes. The thesis addresses several problem domains, with contributions on different layers of the WSN communication stack. The main contributions can be summarized as follows: First, a novel run-time adaptive MAC protocol is introduced, which stepwise allocates the power-hungry radio interface in an on-demand manner when the encountered traffic load requires it. Second, the thesis outlines a methodology for robust, reliable and accurate software-based energy estimation, which is calculated at network run-time on the sensor node itself. Third, the thesis evaluates several Forward Error Correction (FEC) strategies to adaptively allocate the correctional power of Error Correcting Codes (ECCs) to cope with temporally and spatially variable bit error rates. Fourth, in the context of TCP-based communications in WSNs, the thesis evaluates distributed caching and local retransmission strategies to overcome the performance-degrading effects of packet corruption and transmission failures when transmitting data over multiple hops. All developed protocols are evaluated on a self-developed real-world WSN testbed and achieve superior performance over selected existing approaches, especially where traffic load and channel conditions are subject to rapid variations over time.
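A minimal sketch of the software-based energy-estimation idea: the node accounts the time spent in each power state and multiplies by pre-characterized current draws. The states and current figures below are placeholders, not the thesis's calibration data:

```python
SUPPLY_VOLTAGE = 3.0  # volts; assumed constant supply

CURRENT_MA = {        # hypothetical per-state current draws (mA)
    "radio_tx": 17.4,
    "radio_rx": 19.7,
    "cpu_active": 1.8,
    "sleep": 0.02,
}

class EnergyEstimator:
    """On-node accounting of energy consumed since start-up."""
    def __init__(self) -> None:
        self.time_in_state_s = {state: 0.0 for state in CURRENT_MA}

    def account(self, state: str, seconds: float) -> None:
        """Called from the driver whenever a power state is left."""
        self.time_in_state_s[state] += seconds

    def consumed_mj(self) -> float:
        """Total energy in millijoules: sum of t * I * V per state."""
        return sum(t * CURRENT_MA[s] * SUPPLY_VOLTAGE
                   for s, t in self.time_in_state_s.items())
```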

Relevance: 30.00%

Abstract:

In this paper we follow a theory-based approach to study the assimilation of compliance software in highly regulated multinational enterprises. These relatively new software products support the automation of controls associated with mandatory compliance requirements. We use institutional and success factor theories to explain the assimilation of compliance software. A framework for analyzing the assimilation of Access Control Systems (ACS), a special type of compliance software, is developed and used to reflect the experiences obtained in four in-depth case studies. One result is that coercive, mimetic, and normative pressures significantly affect ACS assimilation. Quality aspects, on the other hand, have only a moderate impact at the beginning of the assimilation process; in later phases, their impact may increase as performance and improvement objectives become more relevant. In addition, it turns out that the position of the enterprises and compatibility heavily influence the assimilation process.

Relevance: 30.00%

Abstract:

Introduction. Lake Houston serves as a reservoir of both recreational and drinking water for residents of Houston, Texas, and the metropolitan area. The Texas Commission on Environmental Quality (TCEQ) expressed concerns about the water quality and increasing amounts of pathogenic bacteria in Lake Houston (3). The objective of this investigation was to evaluate water quality in Cypress Creek with respect to the presence of bacteria, nitrates, nitrites, carbon, phosphorus, dissolved oxygen, pH, turbidity, suspended solids, dissolved solids, and chlorine. The aims of this project were to analyze samples of water from Cypress Creek and to render a quantitative and graphical representation of the results. The collected information will allow for a better understanding of the aqueous environment in Cypress Creek. Methods. Water samples were collected in August 2009 and analyzed in the field and at the UTSPH laboratory by spectrophotometry and other methods. Mapping software was used to develop novel maps of the sample sites using coordinates obtained with the Global Positioning System (GPS). Sample sites and concentrations were mapped using Geographic Information System (GIS) software and correlated with permitted outfalls and other land use characteristics. Results. All areas sampled were positive for the presence of total coliform and Escherichia coli (E. coli). The presence of other water contaminants varied at each location in Cypress Creek but remained under the maximum allowable limits designated by the Texas Commission on Environmental Quality. However, dissolved oxygen concentrations were elevated above the TCEQ limit of 5.0 mg/L at the majority of the sites. One site had a near-limit nitrate concentration of 9.8 mg/L. Land use above this site included farm land, agricultural land, a golf course, parks, residential neighborhoods, and nine permitted TCEQ effluent discharge sites within 0.5 miles upstream. Significance. Lake Houston and its tributary, Cypress Creek, are used as recreational waters where individuals may become exposed to microbial contamination. Lake Houston is also the source of drinking water for much of Houston/Harris and Galveston Counties. This research identified the presence of microbial contaminants in Cypress Creek above TCEQ regulatory requirements. Other water quality variables measured were in line with TCEQ regulations, except for the near-limit nitrate concentration at sample site #10, at Jarvis and Timberlake in Cypress, Texas.

Relevance: 30.00%

Abstract:

The BSRN Toolbox is a software package supplied by the WRMC and is freely available to all station scientists and data users. The main features of the package include a download manager for Station-to-Archive files, a tool to convert files into human-readable TAB-separated ASCII tables (similar to those output by the PANGAEA database), and a tool to check data sets for violations of the "BSRN Global Network recommended QC tests, V2.0" quality criteria. The latter tool creates quality codes, one per measured value, indicating whether the data are "physically possible" or "extremely rare," or whether "intercomparison limits are exceeded." In addition, auxiliary data such as the solar zenith angle, or the global radiation calculated from the diffuse and direct components, can be output. All output from the QC tool can be visualized using PanPlot (doi:10.1594/PANGAEA.816201).
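As an illustration of that auxiliary output, here is a minimal sketch computing global radiation from its components (GHI = DHI + DNI · cos(solar zenith angle)) together with a crude physical-possibility check; the numeric limits are placeholders, not the official V2.0 thresholds:

```python
import math

def global_from_components(diffuse_wm2: float, direct_wm2: float,
                           zenith_deg: float) -> float:
    """Global horizontal irradiance from its diffuse and direct parts."""
    return diffuse_wm2 + direct_wm2 * math.cos(math.radians(zenith_deg))

def physically_possible(ghi_wm2: float) -> bool:
    """Crude QC flag: GHI must lie within placeholder physical limits."""
    return -4.0 <= ghi_wm2 <= 1500.0

ghi = global_from_components(diffuse_wm2=120.0, direct_wm2=800.0,
                             zenith_deg=30.0)
print(f"GHI = {ghi:.1f} W/m^2, possible = {physically_possible(ghi)}")
```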

Relevance: 30.00%

Abstract:

Usability is the capability of the software product to be understood, learned, used and attractive to the user when used under specified conditions. Many studies demonstrate the benefits of usability, yet to this day software products continue to exhibit consistently low levels of this quality attribute. Furthermore, poor usability in software systems contributes largely to software failing in actual use. One of the main disciplines involved in usability is Human-Computer Interaction (HCI). Over the past two decades the HCI community has proposed specific features that should be present in applications to improve their usability, yet incorporating them into software continues to be far from trivial for software developers. These difficulties are due to multiple factors, including the high level of abstraction at which these HCI recommendations are made and how far removed they are from actual software implementation. In order to bridge this gap, the Software Engineering community has long proposed software design solutions to help developers include usability features in software; the problem, however, remains an open research question. This doctoral thesis addresses the problem of helping software developers include specific usability features in their applications by providing them with structured and tangible guidance in the form of a process, which we have termed the Usability-Oriented Software Development Process. This process is supported by a set of Software Usability Guidelines that help developers incorporate a set of eleven usability features with a high impact on software design. The Usability-Oriented Software Development Process and the Software Usability Guidelines have been validated across multiple academic projects and proven to help software developers include such usability features in their software applications. Their use significantly reduced development time and improved the quality of the resulting designs of these projects. Furthermore, in this work we propose a software tool to automate the application of the proposed process. In sum, this work contributes to the integration of the Software Engineering and HCI disciplines, providing a framework that helps software developers create usable applications in an efficient way.

Relevance: 30.00%

Abstract:

For years, the Human Computer Interaction (HCI) community has crafted usability guidelines that clearly define what characteristics a software system should have in order to be easy to use. However, the Software Engineering (SE) community keeps falling short of successfully incorporating these recommendations into software projects. From an SE perspective, the process of incorporating usability features into software is not always straightforward, as a large number of these features have heavy implications for the underlying software architecture. For example, successfully including an "undo" feature in an application requires the design and implementation of many complex, interrelated data structures and functionalities. Our work is focused on providing developers with a set of software design patterns to assist them in the process of designing more usable software. This would contribute to the proper inclusion of specific usability features with a high impact on the software design. Preliminary validation data show that usage of the guidelines also has positive effects on development time and overall software design quality.
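To illustrate why an "undo" feature reaches deep into the architecture, here is a minimal command-pattern sketch: every user-visible operation is reified as an object that knows how to reverse itself, and the application keeps a history stack. This is an illustration of the general pattern, not one of the authors' catalogued design patterns:

```python
from typing import List

class AppendCommand:
    """One undoable edit on a shared document buffer."""
    def __init__(self, doc: List[str], text: str):
        self.doc, self.text = doc, text

    def execute(self) -> None:
        self.doc.append(self.text)

    def undo(self) -> None:
        self.doc.pop()

class History:
    """Stack of executed commands, enabling last-in-first-out undo."""
    def __init__(self) -> None:
        self._done: List[AppendCommand] = []

    def run(self, cmd: AppendCommand) -> None:
        cmd.execute()
        self._done.append(cmd)

    def undo_last(self) -> None:
        if self._done:
            self._done.pop().undo()

doc: List[str] = []
history = History()
history.run(AppendCommand(doc, "hello"))
history.run(AppendCommand(doc, "world"))
history.undo_last()
print(doc)  # ['hello']
```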

Relevance: 30.00%

Abstract:

Quality assessment is one of the activities performed as part of systematic literature reviews. It is commonly accepted that a good-quality experiment is bias free. Bias is considered to be related to internal validity (e.g., how adequately the experiment is planned, executed and analysed). Quality assessment is usually conducted using checklists and quality scales. It has not yet been proven, however, that quality is related to experimental bias. Aim: Identify whether there is a relationship between internal validity and bias in software engineering experiments. Method: We built a quality scale to determine the quality of the studies, which we applied to 28 experiments included in two systematic literature reviews. We proposed an objective indicator of experimental bias, which we applied to the same 28 experiments. Finally, we analysed the correlations between the quality scores and the proposed measure of bias. Results: We failed to find a relationship between the global quality score (resulting from the quality scale) and bias; however, we did identify interesting correlations between bias and some particular aspects of internal validity measured by the instrument. Conclusions: There is an empirically provable relationship between internal validity and bias. It is feasible to apply quality assessment in systematic literature reviews, subject to limits on the internal validity aspects considered.
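A minimal sketch of the kind of correlation analysis described, pairing per-experiment quality scores with the bias indicator; the numbers are synthetic, and Spearman's rho is our choice for ordinal quality scores since the abstract does not name the statistic:

```python
from scipy.stats import spearmanr

quality_scores = [14, 9, 11, 7, 12, 8, 10, 13]   # one value per experiment
bias_measure   = [0.2, 0.6, 0.4, 0.7, 0.3, 0.5, 0.4, 0.2]

# Rank correlation between scale-based quality and the bias indicator.
rho, p_value = spearmanr(quality_scores, bias_measure)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```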

Relevance: 30.00%

Abstract:

Software testing is a key aspect of software reliability and quality assurance in a context where software development constantly has to overcome mammoth challenges in a continuously changing environment. One of the characteristics of software testing is that it has a large intellectual capital component and can thus benefit from the experience gained in past projects. Software testing can therefore potentially benefit from solutions provided by the knowledge management discipline. There are in fact a number of proposals concerning effective knowledge management for several software engineering processes. Objective: We defend the use of a lessons learned system for software testing. The reason is that such a system is an effective knowledge management resource enabling testers and managers to take advantage of the experience locked away in the brains of the testers. To do this, the experience has to be gathered, disseminated and reused. Method: After analyzing the proposals for managing software testing experience, we detected significant weaknesses in the current systems of this type. The architectural model proposed here for lessons learned systems is designed to avoid these weaknesses. This model (i) defines the structure of the software testing lessons learned; (ii) sets up procedures for lessons learned management; and (iii) supports the design of software tools to manage the lessons learned. Results: A different approach, based on the management of the lessons learned that software testing engineers gather from everyday experience, with two basic goals: usefulness and applicability. Conclusion: The architectural model proposed here lays the groundwork for overcoming the obstacles to sharing and reusing the experience gained in software testing and test management. As such, it provides guidance for developing software testing lessons learned systems.
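As a concrete reading of point (i) above, here is a minimal sketch of how a software testing lesson learned might be structured as a record type; the field names and retrieval hook are our assumptions, not the schema defined by the proposed architectural model:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LessonLearned:
    """One unit of testing experience, ready to gather, share and reuse."""
    title: str
    context: str          # project/test activity where it was gathered
    problem: str          # what went wrong (or right)
    recommendation: str   # what to do about it next time
    keywords: List[str] = field(default_factory=list)

    def matches(self, query: str) -> bool:
        """Naive retrieval hook for reuse by testers and managers."""
        q = query.lower()
        return q in self.title.lower() or q in " ".join(self.keywords).lower()

lesson = LessonLearned(
    title="Flaky UI tests on nightly builds",
    context="Regression suite, release 2.3",
    problem="Timing-dependent selectors failed intermittently",
    recommendation="Replace sleeps with explicit waits on UI state",
    keywords=["ui", "flaky", "regression"],
)
print(lesson.matches("flaky"))  # True
```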