998 results for Code review
Abstract:
Code review is an essential process regardless of a project's maturity; it seeks to evaluate the contribution made by the code that developers submit. In principle, code review improves the quality of code changes (patches) before they are committed to the project's master repository. In practice, carrying out this process does not rule out the possibility that some bugs go unnoticed. In this document, we present an empirical study investigating code review in a large open-source project. We investigate the relationships between reviewers' inspections and the personal and temporal factors that could affect the quality of those inspections. First, we report a quantitative study in which we use the SZZ algorithm to detect bug-inducing changes, which we then linked with the code review information extracted from the issue tracking system. We found that the reasons why reviewers miss certain bugs correlate both with their personal characteristics and with the technical properties of the patches under review. Second, we report a qualitative study in which we invited Mozilla developers to give their opinion on the attributes of a well-done code review. The results of our survey suggest that developers consider technical aspects (patch size, number of chunks and modules) as well as personal characteristics (experience and review queue) to be factors that strongly influence the quality of code reviews.
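As background to the quantitative part, a minimal sketch of the SZZ idea follows: starting from a bug-fixing commit in a git repository, blame the lines that the fix deleted or changed in the parent revision; the commits that last touched those lines are candidate bug-inducing changes. The function name, the single-file scope, and the use of plain git commands are illustrative assumptions, not the study's actual tooling.

```python
import subprocess

def bug_inducing_candidates(repo, fix_commit, path):
    """SZZ-style lookup (hypothetical helper): blame, in the parent of a
    bug-fixing commit, the lines the fix deleted or changed; the commits
    that last touched those lines are candidate bug-inducing changes."""
    # Lines removed by the fix appear as '-' ranges in the unified diff
    diff = subprocess.run(
        ["git", "-C", repo, "diff", "-U0", f"{fix_commit}^", fix_commit, "--", path],
        capture_output=True, text=True, check=True).stdout
    removed = []  # line numbers in the parent version of the file
    for line in diff.splitlines():
        if line.startswith("@@"):
            old = line.split()[1]                 # e.g. '-12,3'
            start, _, count = old[1:].partition(",")
            removed.extend(range(int(start), int(start) + int(count or 1)))
    candidates = set()
    for lineno in removed:
        blame = subprocess.run(
            ["git", "-C", repo, "blame", "-L", f"{lineno},{lineno}",
             "--porcelain", f"{fix_commit}^", "--", path],
            capture_output=True, text=True, check=True).stdout
        candidates.add(blame.split()[0])          # first token is the SHA
    return candidates
```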
Abstract:
This thesis looks for a correlation between the results obtained with software metrics and the defects found in the software, using existing software products as the test group. The thesis examines whether software metrics could have been used to locate the problem areas of the software and thereby provide valuable information for software development. Measurement could be used to allocate resources better in code reviews, code integration, system testing and scheduling; with the help of measurement, these tasks would have more information for targeting resources. The test group consists of various software products; common to all of them are successive releases. When a new release is made, the previous release is used as the base on top of which new source code is developed. For this reason, software measurement must be able to separate the source code of the previous release from the new source code. The software metrics used in this thesis are common ones, widely used in software engineering to measure various properties of source code that are believed to affect fault-proneness. The purpose of this thesis is to study the usability of these software metrics in the software environments that serve as the test group. The practical part of the work succeeded in finding a correlation between some software metrics and defects, while other metrics did not give convincing results. Using software metrics, it appears possible to identify the fault-prone parts of a program and thus improve the efficiency of software development. The use of software metrics in product development is justified, and with their help it might be possible to influence software quality in future releases.
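The kind of metric-versus-defect analysis the thesis describes could be sketched as below: compute a correlation between a per-module source-code metric and the defects reported against each module, then direct review effort at high-metric modules. The metric choice (lines of new code) and the data are invented for illustration.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between per-module metric values and defect counts."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-module data: a size metric vs. defects found after release
loc     = [120, 450, 980, 200, 1500, 310]   # lines of new code per module
defects = [  1,   3,   7,   1,   12,   2]   # defects reported per module
print(f"r = {pearson(loc, defects):.2f}")   # a high r flags review targets
```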
Abstract:
An accepted fact in software engineering is that software must undergo a verification and validation process during development to ascertain and improve its quality level. But there are more techniques than a single developer could master, and it is impossible to be certain that software is free of defects. So it is crucial for developers to be able to choose, from the available evaluation techniques, the one most suitable and most likely to yield optimum quality results for different products. Although some knowledge is available on the strengths and weaknesses of the available software quality assurance techniques, not much is known yet about the relationship between different techniques and their contextual behaviour. Objective: This research investigates the effectiveness of two testing techniques (equivalence class partitioning and decision coverage) and one review technique (code review by abstraction) in terms of their fault detection capability. This will be used to strengthen the practical knowledge available on these techniques.
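For readers unfamiliar with the first of these techniques, a small hypothetical sketch of equivalence class partitioning is given below: the input domain of a function is split into classes assumed to exercise the same code path, and one representative per class becomes a test case. The `discount` function and its boundaries are made up for illustration.

```python
# Equivalence class partitioning for a hypothetical discount function that
# accepts ages 0-120: one representative value is tested per class, on the
# assumption that all members of a class exercise the same code path.
def discount(age: int) -> float:
    if age < 0 or age > 120:
        raise ValueError("age out of range")
    if age < 18:
        return 0.5
    if age >= 65:
        return 0.3
    return 0.0

cases = {
    "invalid-low":  (-1, ValueError),
    "minor":        (10, 0.5),
    "adult":        (40, 0.0),
    "senior":       (70, 0.3),
    "invalid-high": (130, ValueError),
}
for name, (age, expected) in cases.items():
    try:
        assert discount(age) == expected, name
    except ValueError:
        assert expected is ValueError, name
print("all equivalence classes pass")
```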
Abstract:
The software implementation of the emergency shutdown feature in a major radiotherapy system was analyzed, using a directed form of code review based on module dependences. Dependences between modules are labelled by particular assumptions; this allows one to trace through the code and identify those fragments responsible for critical features. An "assumption tree" is constructed in parallel, showing the assumptions which each module makes about others. The root of the assumption tree is the critical feature of interest, and its leaves represent assumptions which, if not valid, might cause the critical feature to fail. The analysis revealed some unexpected assumptions that motivated improvements to the code.
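A minimal sketch of what an assumption tree might look like as a data structure is given below; the node type, the example assumptions, and the traversal are illustrative guesses, not the analyzed system's actual code.

```python
from dataclasses import dataclass, field

@dataclass
class Assumption:
    """Node in an assumption tree: the root is the critical feature,
    children are the assumptions it depends on; leaves are assumptions
    that, if invalid, could cause the critical feature to fail."""
    claim: str                      # e.g. "module X zeroes beam current"
    children: list = field(default_factory=list)

    def leaves(self):
        if not self.children:
            yield self.claim
        else:
            for child in self.children:
                yield from child.leaves()

# Hypothetical tree for an emergency-shutdown feature
root = Assumption("shutdown stops the beam", [
    Assumption("controller calls actuator.stop()", [
        Assumption("interrupt handler is never masked"),
    ]),
    Assumption("actuator.stop() drives current to zero"),
])
print(list(root.leaves()))   # the assumptions a reviewer must validate
```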
Abstract:
IPH responded to the Department of Justice, Equality and Defence review of the voluntary Code of Practice for the display and sale of alcohol in supermarkets, convenience stores and similar mixed trading outlets. The voluntary Code was introduced in 2008 as an alternative to the statutory rules for structural separation of alcohol products in mixed trading outlets which are set out in section 9 of the Intoxicating Liquor Act 2008. Interested bodies and individuals were invited to submit comments on the Compliance Report for 2011 and on the effectiveness of the voluntary approach to structural separation by 20th December 2011. The Minister said he intended to also seek the views of the Minister for Health and the Joint Oireachtas Committee on Justice, Defence and Equality before reaching any decision on whether to bring the statutory rules in the 2008 Act into operation.
Abstract:
The "50 States Project" is the name given to President Ronald D. Reagan;s 1981 pledge to encourage the fifty governors to initiate individual state projects to review their state Codes for unequal treatment of persons based upon sex. We believe that Iowa is the first state to complete this project. Project efforts in Iowa began in June of 1981, when the Governor Robert D. ray appointed Dr. Patricia L. Geadelmann, Chairperson on the Iowa commission on the Status of Women, as Iowa's 50 State Project representative. A 50 States planning committee was formed consisting of members from the Governor Ray's staff, the Iowa Commission on the Status of Women, and the Iowa Legislature. Various alternatives for reviewing the Iowa code and the Iowa Administrative Rules were studied and recommendations of the group were reported to Governor Terry E. Branstad prior to his inauguration.
Abstract:
Digital technologies have profoundly changed not only the ways we create, distribute, access, use and re-use information but also many of the governance structures we had in place. Overall, "older" institutions at all governance levels have grappled with, and often failed to master, the multi-faceted and multi-directional issues of the Internet. Regulatory entrepreneurs have yet to discover and fully mobilize the potential of digital technologies as an influential factor in the regulability of the environment and as a potential regulatory tool in themselves. At the same time, we have seen a deterioration of some public spaces and a lower prioritization of public objectives when strong private commercial interests are at play, most tellingly in the field of copyright. Less tangibly, private ordering has taken hold and captured, through contracts, spaces previously regulated by public law. Code embedded in technology often replaces law. Non-state action has in general proliferated and put serious pressure upon conventional state-centered, command-and-control models. Under the conditions of this "messy" governance, the provision of key public goods, such as freedom of information, has been made difficult or is indeed jeopardized. The grand question is how we can navigate this complex multi-actor, multi-issue space and secure the attainment of fundamental public interest objectives. This is also the question that Ian Brown and Chris Marsden seek to answer with their book, Regulating Code, recently published in the "Information Revolution and Global Politics" series of MIT Press. This book review critically assesses this bold effort by Brown and Marsden.
Abstract:
Mode of access: Internet.
Abstract:
"HHP-25/11-83(2M)E"--P. [4] of cover.
Abstract:
In this article, we review the means for visualizing the syntax, semantics and source code of programming languages that support the procedural and/or object-oriented paradigm. We examine how the structure of source code in the structured and object-oriented programming styles has influenced different approaches to teaching them. We maintain the thesis, valid for the object-oriented paradigm, that the design and programming of classes are done by the same specialist, whose training should therefore include design as well as programming skills and knowledge of modeling abstract data structures. We pose the question of how the high level of abstraction in the object-oriented paradigm can be presented in a simple model at the design stage, so that complexity at the programming stage stays low and is easily learnable. We answer this question by building models in UML notation, using a concrete example from teaching practice that includes programming techniques for inheritance and polymorphism.
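A teaching-style example of the kind the article models in UML might look like the following sketch: a base class fixes the interface at the design stage, subclasses override it, and polymorphism lets client code stay simple. The shape hierarchy is a standard classroom illustration, not necessarily the article's own example.

```python
from abc import ABC, abstractmethod
from math import pi

class Shape(ABC):
    """Design-stage abstraction: the interface every subclass must honour."""
    @abstractmethod
    def area(self) -> float: ...

class Circle(Shape):
    def __init__(self, r: float): self.r = r
    def area(self) -> float: return pi * self.r ** 2

class Rectangle(Shape):
    def __init__(self, w: float, h: float): self.w, self.h = w, h
    def area(self) -> float: return self.w * self.h

shapes: list[Shape] = [Circle(1.0), Rectangle(2.0, 3.0)]
for s in shapes:                 # dynamic dispatch picks the right area()
    print(type(s).__name__, s.area())
```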
Abstract:
Clinicians working in the field of congenital and paediatric cardiology have long felt the need for a common diagnostic and therapeutic nomenclature and coding system with which to classify patients of all ages with congenital and acquired cardiac disease. A cohesive and comprehensive system of nomenclature, suitable for setting a global standard for multicentric analysis of outcomes and stratification of risk, has only recently emerged, namely, The International Paediatric and Congenital Cardiac Code. This review gives a historical perspective on the development of systems of nomenclature in general, and specifically with respect to the diagnosis and treatment of patients with paediatric and congenital cardiac disease. Finally, current and future efforts to merge such systems into the paperless environment of the electronic health or patient record on a global scale are briefly explored. On October 6, 2000, The International Nomenclature Committee for Pediatric and Congenital Heart Disease was established. In January, 2005, the International Nomenclature Committee was constituted in Canada as The International Society for Nomenclature of Paediatric and Congenital Heart Disease. This International Society now has three working groups. The Nomenclature Working Group developed The International Paediatric and Congenital Cardiac Code and will continue to maintain, expand, update, and preserve this International Code. It will also provide ready access to the International Code for the global paediatric and congenital cardiology and cardiac surgery communities, related disciplines, the healthcare industry, and governmental agencies, both electronically and in published form. The Definitions Working Group will write definitions for the terms in the International Paediatric and Congenital Cardiac Code, building on the previously published definitions from the Nomenclature Working Group. The Archiving Working Group, also known as The Congenital Heart Archiving Research Team, will link images and videos to the International Paediatric and Congenital Cardiac Code. The images and videos will be acquired from cardiac morphologic specimens and imaging modalities such as echocardiography, angiography, computerized axial tomography and magnetic resonance imaging, as well as intraoperative images and videos. Efforts are ongoing to expand the usage of The International Paediatric and Congenital Cardiac Code to other areas of global healthcare. Collaborative efforts are underway involving the leadership of The International Nomenclature Committee for Pediatric and Congenital Heart Disease and the representatives of the steering group responsible for the creation of the 11th revision of the International Classification of Diseases, administered by the World Health Organisation. Similar collaborative efforts are underway involving the leadership of The International Nomenclature Committee for Pediatric and Congenital Heart Disease and the International Health Terminology Standards Development Organisation, the owners of the Systematized Nomenclature of Medicine ("SNOMED"). The International Paediatric and Congenital Cardiac Code was created by specialists in the field to name and classify paediatric and congenital cardiac disease and its treatment. It is a comprehensive code that can be freely downloaded from the internet (http://www.IPCCC.net) and is already in use worldwide, particularly for international comparisons of outcomes. The goal of this effort is to create strategies for stratification of risk and to improve healthcare for the individual patient. The collaboration with the World Health Organization, the International Health Terminology Standards Development Organisation, and the healthcare industry will lead to further enhancement of the International Code and to its more universal use.
Abstract:
Objective: To develop a 'quality use of medicines' coding system for the assessment of pharmacists' medication reviews and to apply it to an appropriate cohort. Method: A 'quality use of medicines' coding system was developed based on findings in the literature. These codes were then applied to 216 (111 intervention, 105 control) veterans' medication profiles by an independent clinical pharmacist, supported by a clinical pharmacologist, with the aim of assessing the appropriateness of pharmacy interventions. The profiles were provided for veterans participating in a randomised, controlled trial in private hospitals evaluating the effect of medication review and discharge counselling. The reliability of the coding was tested by two independent clinical pharmacists on a random sample of 23 veterans from the study population. Main outcome measure: Interrater reliability was assessed by applying Cohen's kappa score to aggregated codes. Results: The coding system based on the literature consisted of 19 codes. The results from the three clinical pharmacists suggested that the original coding system had two major problems: (a) a lack of discrimination for certain recommendations, e.g. adverse drug reactions, toxicity and mortality may be seen as variations in degree of a single effect, and (b) certain codes, e.g. essential therapy, were of low prevalence. The interrater reliability for an aggregation of all codes into positive, negative and clinically non-significant codes ranged from 0.49 to 0.58 (good to fair). The interrater reliability increased to 0.72-0.79 (excellent) when all negative codes were excluded. Analysis of the sample of 216 profiles showed that the most prevalent recommendations from the clinical pharmacists were a positive impact in reducing adverse responses (31.9%), an improvement in good clinical pharmacy practice (25.5%) and a positive impact in reducing drug toxicity (11.1%). Most medications were assigned the clinically non-significant code (96.6%). In fact, the interventions led to a statistically significant difference in pharmacist recommendations in the categories of adverse response, toxicity and good clinical pharmacy practice, as measured by the quality use of medicines coding system. Conclusion: It was possible to use the quality use of medicines coding system to rate the quality and potential health impact of pharmacists' medication reviews, and the system did pick up differences between intervention and control patients. The interrater reliability for the summarised coding system was fair, but a larger sample of medication regimens is needed to assess the non-summarised quality use of medicines coding system.
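For reference, the interrater statistic used here can be computed as in the sketch below: Cohen's kappa is the observed agreement corrected for the agreement expected by chance, kappa = (p_o - p_e) / (1 - p_e). The two raters' code sequences are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e): observed agreement p_o
    corrected for the chance agreement p_e implied by each rater's
    marginal code frequencies."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[k] * cb[k] for k in ca) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical aggregated codes assigned by two clinical pharmacists
a = ["positive", "negative", "non-significant", "positive", "non-significant"]
b = ["positive", "non-significant", "non-significant", "positive", "negative"]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```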
Abstract:
The tumour necrosis factor (TNF) family members B cell activating factor (BAFF) and APRIL (a proliferation-inducing ligand) are crucial survival factors for peripheral B cells. An excess of BAFF leads to the development of autoimmune disorders in animal models, and high levels of BAFF have been detected in the serum of patients with various autoimmune conditions. In this Review, we consider the possibility that in mice autoimmunity induced by BAFF is linked to T cell-independent B cell activation rather than to a severe breakdown of B cell tolerance. We also outline the mechanisms of BAFF signalling, the impact of ligand oligomerization on receptor activation and the progress of BAFF-depleting agents in the clinical setting.
Abstract:
The Broadcasting Authority of Ireland (BAI) is an independent statutory organisation responsible for certain aspects of television and radio services in Ireland, guided by the Broadcasting Act 2009. The BAI is undertaking a review of the Children's Commercial Communications Code section 11 rules on Diet and Nutrition. This section sets down the standards with which commercial communications for food and drink must comply when shown during children's programmes and/or where these communications are for food and drink products or services that are of special interest to children.