946 results for Clean Code
Clean Code vs Dirty Code: A field experiment to explain how Clean Code affects code comprehension
Abstract:
Large and complex codebases with poor code comprehension are an increasingly common problem among companies today. Poor code comprehension results in more time spent maintaining and modifying code, which leads to increased costs for a company. Clean Code is considered by some to be the solution to this problem. Clean Code is a collection of guidelines and principles for writing code that is easy to understand and maintain. A knowledge gap was identified regarding empirical data on how Clean Code affects code comprehension. The research question of the study was: How is comprehension affected when modifying code that has been refactored according to the Clean Code principles for naming and for writing functions? To investigate how Clean Code affects code comprehension, a field experiment was carried out together with the company CGM Lab Scandinavia in Borlänge, in which data on time consumption and perceived comprehension of test participants were collected and analysed. The results of the study show no clear improvement or deterioration in code comprehension, as only the perceived code comprehension appears to be affected. All test participants prefer Clean Code over Dirty Code, even though time consumption is unaffected. This leads to the conclusion that the effects of Clean Code may not be immediate, since developers have not yet had time to adapt to Clean Code and therefore cannot take full advantage of it. The study gives an indication of Clean Code's potential to improve code comprehension.
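To make the two principles under test concrete, here is a minimal, hypothetical Python sketch (not code from the thesis or from CGM Lab Scandinavia) contrasting a "Dirty" routine, with cryptic names and one do-everything function, against a "Clean" rewrite with intention-revealing names and single-purpose functions:

```python
# Hypothetical before/after example of the naming and function principles
# (illustrative only; not code from the thesis).

# "Dirty" version: cryptic names, one function doing several things at once.
def p(l, t):
    r = []
    for e in l:
        if e[1] > t:
            r.append((e[0], e[1] * 0.9))
    return r

# "Clean" version: intention-revealing names, one small task per function.
DISCOUNT_RATE = 0.9

def is_eligible(price: float, threshold: float) -> bool:
    return price > threshold

def apply_discount(price: float) -> float:
    return price * DISCOUNT_RATE

def discounted_orders(orders, threshold):
    """Discounted (name, price) pairs for orders above the price threshold."""
    return [(name, apply_discount(price))
            for name, price in orders
            if is_eligible(price, threshold)]

# Both versions compute the same result; only readability differs.
assert p([("a", 10.0), ("b", 30.0)], 20.0) == \
       discounted_orders([("a", 10.0), ("b", 30.0)], 20.0)
```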
Abstract:
Solving linear systems is an important problem for scientific computing. Exploiting parallelism is essential for solving complex systems, and this traditionally involves writing parallel algorithms on top of a library such as MPI. The SPIKE family of algorithms is one well-known example of a parallel solver for linear systems. The Hierarchically Tiled Array data type extends traditional data-parallel array operations with explicit tiling and allows programmers to directly manipulate tiles. The tiles of the HTA data type map naturally to the block nature of many numeric computations, including the SPIKE family of algorithms. The higher level of abstraction of the HTA enables the same program to be portable across different platforms. Current implementations target both shared-memory and distributed-memory models. In this thesis we present a proof-of-concept for portable linear solvers. We implement two algorithms from the SPIKE family using the HTA library. We show that our implementations of SPIKE exploit the abstractions provided by the HTA to produce a compact, clean code that can run on both shared-memory and distributed-memory models without modification. We discuss how we map the algorithms to HTA programs as well as examine their performance. We compare the performance of our HTA codes to comparable codes written in MPI as well as current state-of-the-art linear algebra routines.
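The abstraction at the heart of this abstract is the tile as a first-class object that block algorithms such as SPIKE can address directly. As a rough illustration of that idea only (a NumPy stand-in, not the actual HTA library API), a sketch:

```python
# A minimal NumPy sketch of the tiling idea behind Hierarchically Tiled Arrays
# (this is NOT the HTA library API, just an illustration of operating on tiles).
import numpy as np

n, tile = 12, 4                       # 12x12 matrix split into 4x4 tiles
A = np.arange(n * n, dtype=float).reshape(n, n)

# View the matrix as a 3x3 grid of 4x4 tiles: tiles[i, j] is one block.
tiles = A.reshape(n // tile, tile, n // tile, tile).swapaxes(1, 2)

# Block algorithms (like SPIKE) manipulate whole tiles instead of scalars:
diag_blocks = [tiles[i, i] for i in range(n // tile)]   # the block diagonal
tiles[0, 1][:] = 0.0                  # zero an off-diagonal tile in place

print(diag_blocks[0].shape)           # (4, 4)
```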
Abstract:
"Information presented in this publication is intended to provide a general understanding of the statutory and regulatory requirements governing storm water. This information is not intended to replace, limit or expand upon the complete statutory and regulatory requirements found in the Illinois Environmental Protection Act and Title 35 of the Illinois Administrative Code."
Abstract:
Extraction and clean-up are essential steps in the analysis of polycyclic aromatic hydrocarbons (PAHs) in a solid matrix. This work compares extraction techniques and clean-up procedures for PAH analysis. PAH levels, their toxicological significance, and their sources were also evaluated in the waters of the Cocó and Ceará rivers. The efficiency of PAH recovery was higher for the Soxhlet and ultrasonic techniques; PAH recovery varied from 69.3 to 99.3%. Total PAH concentration (ΣPAH) varied from 720.73 to 2234.76 µg kg⁻¹ (Cocó river) and from 96.4 to 1859.21 µg kg⁻¹ (Ceará river). The main PAH sources are pyrolytic processes, and the levels were classified as medium, meaning that adverse effects are possible.
Abstract:
Methyl esters were prepared by the clean, one-step catalytic esterification of primary alcohols using molecular oxygen as a green oxidant and a newly developed SiO₂-supported gold nanoparticle catalyst. The catalyst was highly active and selective over a broad range of pressures and temperatures. At 3 atm O₂ and 130 °C, benzyl alcohol was converted to methyl benzoate with 100% conversion and 100% selectivity in 4 h of reaction. This catalytic process is much "greener" than the conventional reaction routes because it avoids the stoichiometric, environmentally unfriendly oxidants usually required for alcohol oxidation, as well as the strong acids, excess reactants, or constant removal of products required to shift the equilibrium toward the desired esterification product.
Abstract:
Use of activated charcoal and an ion-exchange resin to clean up and concentrate enzymes in extracts from biodegraded wood. Ceriporiopsis subvermispora was used for the biodegradation of Eucalyptus grandis chips in the presence or absence of co-substrates (glucose and corn steep liquor) for 7, 14, and 28 days. Afterwards, the biodegraded chips were extracted with 50 mM sodium acetate buffer (pH 5.5) supplemented with 0.01% Tween 60. High activities of manganese peroxidases (MnPs) were observed in all the extracts, both in the absence (430, 765, and 896 IU kg⁻¹, respectively) and in the presence of co-substrates (1,013, 2,066, and 2,323 IU kg⁻¹, respectively). The extracts presented a high ratio between the absorbances at 280 and 405 nm, indicating a strong predominance of lignin-derived aromatic compounds over heme peroxidases. Adsorption onto activated charcoal proved to be an adequate strategy for reducing the absorbance at 280 nm in all the extracts. Moreover, it made it possible to maximize the capacity of the anion-exchange resin bed (DEAE-Sepharose) used to concentrate the MnPs present in the extracts. It was concluded that the use of activated charcoal followed by adsorption onto DEAE-Sepharose is a strategy that can be used to concentrate MnPs in extracts obtained during the biodegradation of E. grandis by C. subvermispora.
Abstract:
This paper presents an investigation of design code provisions for steel-concrete composite columns. The study covers the national building codes of the United States, Canada, and Brazil, and the transnational EUROCODE. The study is based on experimental results for 93 axially loaded concrete-filled tubular steel columns, including 36 unpublished, full-scale experimental results by the authors and 57 results from the literature. The error of the resistance models is determined by comparing experimental ultimate loads with code-predicted column resistances. Regression analysis is used to describe the variation of model error with column slenderness and to describe model uncertainty. The paper shows that the Canadian and European codes are able to predict mean column resistance, since the resistance models of these codes present detailed formulations for concrete confinement by the steel tube. The ANSI/AISC and Brazilian codes make limited allowance for concrete confinement and become very conservative for short columns. Reliability analysis is used to evaluate the safety level of the code provisions; it includes model error and other random problem parameters such as steel and concrete strengths and dead and live loads. Design code provisions are evaluated in terms of sufficient and uniform reliability criteria. Results show that the four design codes studied provide uniform reliability, with the Canadian code being best in achieving this goal, the result of a code that is well balanced in terms of both load combinations and resistance model. The European code is less successful in providing uniform reliability, a consequence of the partial factors used in load combinations. The paper also shows that reliability indexes of columns designed according to the European code can be as low as 2.2, which is well below the target reliability levels of EUROCODE.
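To illustrate the model-error and reliability-index concepts used above, a minimal sketch with made-up numbers (not the paper's 93 test results): model error is taken as the ratio of experimental ultimate load to code-predicted resistance, and a simple Cornell-type reliability index is estimated by Monte Carlo.

```python
# Hypothetical sketch of the model-error statistic described in the abstract
# (made-up numbers, not the paper's experimental data).
import numpy as np

experimental_load = np.array([1510.0, 1720.0, 1390.0, 1880.0])     # kN, measured
predicted_resistance = np.array([1400.0, 1650.0, 1300.0, 1750.0])  # kN, code formula

model_error = experimental_load / predicted_resistance  # > 1 means conservative
print("mean model error:", model_error.mean())
print("c.o.v.:", model_error.std(ddof=1) / model_error.mean())

# Cornell-type reliability sketch: beta = mean(M) / std(M) for the safety
# margin M = R - S, with random resistance R and load effect S.
rng = np.random.default_rng(0)
R = rng.lognormal(mean=np.log(1500.0), sigma=0.10, size=100_000)  # resistance, kN
S = rng.normal(loc=1000.0, scale=150.0, size=100_000)             # load effect, kN
M = R - S
print("approximate reliability index beta:", round(M.mean() / M.std(), 2))
```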
Abstract:
This paper presents results of a verification test of a Direct Numerical Simulation code of mixed high order of accuracy using the method of manufactured solutions (MMS). The test is based on the formulation of an analytical solution for the Navier-Stokes equations modified by the addition of a source term. The numerical code was aimed at simulating the temporal evolution of instability waves in a plane Poiseuille flow. The governing equations were solved in a vorticity-velocity formulation for a two-dimensional incompressible flow. The code employed two different numerical schemes. One used mixed high-order compact and non-compact finite differences of fourth- to sixth-order accuracy. The other used spectral methods instead of finite differences for the streamwise direction, which was periodic. In the present test, particular attention was paid to the boundary conditions of the physical problem of interest. Indeed, the verification procedure using MMS can be more demanding than the often-used comparison with Linear Stability Theory, particularly because the latter test pays no attention to the nonlinear terms. For the present verification test, it was possible to manufacture an analytical solution that reproduced some aspects of an instability wave in a nonlinear stage. Although the results of the verification by MMS for this mixed-order numerical scheme had to be interpreted with care, the test was very useful, as it gave confidence that the code was free of programming errors.
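As a minimal illustration of the MMS idea (applied here to a 1-D Poisson problem, far simpler than the paper's Navier-Stokes setting), a sketch: pick an analytical solution, derive the source term that makes it exact, and check that the numerical error decays at the scheme's expected order.

```python
# Minimal sketch of the method of manufactured solutions (MMS) for u'' = f.
import numpy as np

def manufactured_u(x):           # the chosen analytical solution
    return np.sin(np.pi * x)

def source(x):                   # forcing term so that u'' = f holds exactly
    return -np.pi**2 * np.sin(np.pi * x)

def solve_poisson(n):
    """Second-order finite differences for u'' = f with u(0) = u(1) = 0."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    # Tridiagonal system (1, -2, 1) for the interior points
    A = (np.diag(-2.0 * np.ones(n - 1))
         + np.diag(np.ones(n - 2), 1)
         + np.diag(np.ones(n - 2), -1))
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, h**2 * source(x[1:-1]))
    return x, u

# Verification: the max error should drop ~4x for each 2x grid refinement.
for n in (16, 32, 64):
    x, u = solve_poisson(n)
    print(n, np.max(np.abs(u - manufactured_u(x))))
```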
Abstract:
The kinetics and mechanism of the thermal activation of peroxydisulfate, in the temperature range from 60 to 80 °C, were investigated in the presence and absence of sodium formate, an additive that turns the oxidizing capacity of the reaction mixture into a reductive one. Trichloroacetic acid (TCA), whose degradation by a reductive mechanism is well reported in the literature, was used as a probe. The chemistry of thermally activated peroxydisulfate is described by a reaction scheme involving free-radical generation. The proposed mechanism is evaluated by computer simulation of the concentration profiles obtained under different experimental conditions. In the presence of formate, SO₄•⁻ radicals yield CO₂•⁻ radicals, which are the main species available for degrading TCA. Under the latter conditions, TCA is depleted more efficiently than in the absence of formate under otherwise identical conditions of temperature and [S₂O₈²⁻]. We therefore conclude that activated peroxydisulfate in the presence of formate as an additive is a convenient method for the mineralization of substrates that are refractory to oxidation, such as perchlorinated hydrocarbons and TCA. This method has the advantage that it leaves no toxic residues.
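A hedged sketch of the kind of concentration-profile simulation the abstract mentions, using a simplified three-reaction radical scheme and made-up rate constants (the paper's actual mechanism and rate constants are not given here):

```python
# Illustrative kinetic simulation: integrate rate equations for a simplified
# radical scheme (hypothetical rate constants, not the paper's values).
import numpy as np
from scipy.integrate import solve_ivp

k_act, k_form, k_tca = 1e-5, 3.0, 1.0   # made-up rate constants

def rates(t, y):
    s2o8, so4, formate, co2r, tca = y
    r_act = k_act * s2o8                 # S2O8(2-) -> 2 SO4(.-)  (thermal)
    r_form = k_form * so4 * formate      # SO4(.-) + HCO2(-) -> CO2(.-)
    r_tca = k_tca * co2r * tca           # CO2(.-) + TCA -> products
    return [-r_act,
            2 * r_act - r_form,
            -r_form,
            r_form - r_tca,
            -r_tca]

y0 = [0.01, 0.0, 0.05, 0.0, 0.001]       # initial concentrations, mol/L
sol = solve_ivp(rates, (0.0, 3600.0), y0, method="LSODA")
print("TCA remaining after 1 h:", sol.y[4, -1])
```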
Abstract:
OctVCE is a Cartesian cell CFD code produced especially for numerical simulations of shock and blast wave interactions with complex geometries, in particular from explosions. Virtual Cell Embedding (VCE) was chosen as its Cartesian cell kernel for its simplicity and sufficiency for practical engineering design problems. The code uses a finite-volume formulation of the unsteady Euler equations with a second-order explicit Runge-Kutta Godunov (MUSCL) scheme. Gradients are calculated using a least-squares method with a minmod limiter. The flux solvers used are AUSM, AUSMDV, and EFM. No fluid-structure coupling or chemical reactions are allowed, but the gas models can be perfect gas and JWL or JWLB for the explosive products. This report also describes the code's 'octree' mesh adaptation capability and the point-inclusion query procedures of the VCE geometry engine. Finally, some space is also devoted to describing code parallelization using the shared-memory OpenMP paradigm. The user manual for the code is to be found in the companion report 2007/13.
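As a small illustration of the minmod limiter named above (an illustrative sketch, not OctVCE's source code), the limiter keeps the smaller of the left and right one-sided differences when they agree in sign and returns zero otherwise, which suppresses spurious oscillations near discontinuities:

```python
# Minimal sketch of a minmod slope limiter as used in MUSCL reconstructions.
import numpy as np

def minmod(a, b):
    """Return the argument of smaller magnitude when signs agree, else 0."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(u):
    """Cell-wise limited slopes from left/right one-sided differences."""
    left = u[1:-1] - u[:-2]
    right = u[2:] - u[1:-1]
    return minmod(left, right)

u = np.array([0.0, 0.0, 1.0, 1.0, 1.0])   # a step profile
print(limited_slopes(u))                   # [0. 0. 0.]: no overshoot at the step
```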
Abstract:
OctVCE is a Cartesian cell CFD code produced especially for numerical simulations of shock and blast wave interactions with complex geometries. Virtual Cell Embedding (VCE) was chosen as its Cartesian cell kernel because it is simple to code and sufficient for practical engineering design problems. This also makes the code much more 'user-friendly' than structured-grid approaches, as the gridding process is done automatically. The CFD methodology relies on a finite-volume formulation of the unsteady Euler equations, solved using a standard explicit Godunov (MUSCL) scheme. Both octree-based adaptive mesh refinement and shared-memory parallel processing capability have also been incorporated. For further details on the theory behind the code, see the companion report 2007/12.
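A toy sketch of the octree refinement idea behind the adaptive meshing mentioned above (an illustrative data structure only, not OctVCE's implementation):

```python
# Toy octree cell: each refinement splits a cube into 8 half-size children.
from dataclasses import dataclass, field

@dataclass
class Cell:
    center: tuple[float, float, float]
    size: float
    children: list["Cell"] = field(default_factory=list)

    def refine(self):
        """Split this cell into 8 children, halving the edge length."""
        h = self.size / 4.0
        cx, cy, cz = self.center
        self.children = [
            Cell((cx + sx * h, cy + sy * h, cz + sz * h), self.size / 2.0)
            for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)
        ]

root = Cell((0.5, 0.5, 0.5), 1.0)
root.refine()                      # one level of refinement: 8 cells
root.children[0].refine()          # refine locally near a flow feature
print(len(root.children), len(root.children[0].children))   # 8 8
```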
Abstract:
Models of warped extra dimensions with custodial symmetry usually predict the existence of a light Kaluza-Klein fermion arising as a partner of the right-handed top quark, sometimes called a light custodian, which we will denote b̃_R. The production of these particles at the LHC can give rise to multi-W events that could be observed in same-sign dilepton channels, but mass reconstruction there is challenging. In this paper we study the possibility of finding a signal for the pair production of this new particle at the LHC, focusing on a rarer but cleaner decay mode of a light custodian into a Z boson and a b quark. In this mode it would be possible to reconstruct the light custodian mass. In addition to the dominant standard model QCD production processes, we include the contribution of the first Kaluza-Klein gluon mode. We find that b̃_R stands out from the background as a peak in the bZ invariant mass. However, when taking into account only the electronic and muonic decay modes of the Z boson and b-tagging efficiencies, the LHC will have access only to the very light range of masses, m(b̃) = O(500) GeV.
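The bZ mass reconstruction amounts to the usual invariant-mass formula m² = (ΣE)² − |Σp|² applied to the b jet and the two leptons from the Z decay. A minimal sketch with made-up four-momenta (illustrative values only, not the paper's simulation):

```python
# Invariant-mass reconstruction sketch; four-momenta are hypothetical, in GeV.
import math

def invariant_mass(*four_momenta):
    """Invariant mass of a system of (E, px, py, pz) four-vectors."""
    E = sum(p[0] for p in four_momenta)
    px = sum(p[1] for p in four_momenta)
    py = sum(p[2] for p in four_momenta)
    pz = sum(p[3] for p in four_momenta)
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

b_jet = (260.0, 120.0, -80.0, 200.0)      # (E, px, py, pz) of the b jet
lep_plus = (180.0, -60.0, 90.0, 140.0)    # leptons from the Z decay
lep_minus = (150.0, 30.0, -50.0, 135.0)

# Candidate mass of the bZ system; a signal would appear as a peak here.
print("m(bZ) ~", round(invariant_mass(b_jet, lep_plus, lep_minus), 1), "GeV")
```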
Abstract:
Clinicians working in the field of congenital and paediatric cardiology have long felt the need for a common diagnostic and therapeutic nomenclature and coding system with which to classify patients of all ages with congenital and acquired cardiac disease. A cohesive and comprehensive system of nomenclature, suitable for setting a global standard for multicentric analysis of outcomes and stratification of risk, has only recently emerged, namely, The International Paediatric and Congenital Cardiac Code. This review will give an historical perspective on the development of systems of nomenclature in general, and specifically with respect to the diagnosis and treatment of patients with paediatric and congenital cardiac disease. Finally, current and future efforts to merge such systems into the paperless environment of the electronic health or patient record on a global scale are briefly explored. On October 6, 2000, The International Nomenclature Committee for Pediatric and Congenital Heart Disease was established. In January, 2005, the International Nomenclature Committee was constituted in Canada as The International Society for Nomenclature of Paediatric and Congenital Heart Disease. This International Society now has three working groups. The Nomenclature Working Group developed The International Paediatric and Congenital Cardiac Code and will continue to maintain, expand, update, and preserve this International Code. It will also provide ready access to the International Code for the global paediatric and congenital cardiology and cardiac surgery communities, related disciplines, the healthcare industry, and governmental agencies, both electronically and in published form. The Definitions Working Group will write definitions for the terms in the International Paediatric and Congenital Cardiac Code, building on the previously published definitions from the Nomenclature Working Group. The Archiving Working Group, also known as The Congenital Heart Archiving Research Team, will link images and videos to the International Paediatric and Congenital Cardiac Code. The images and videos will be acquired from cardiac morphologic specimens and from imaging modalities such as echocardiography, angiography, computerized axial tomography, and magnetic resonance imaging, as well as from intraoperative images and videos. Efforts are ongoing to expand the usage of The International Paediatric and Congenital Cardiac Code to other areas of global healthcare. Collaborative efforts are underway involving the leadership of The International Nomenclature Committee for Pediatric and Congenital Heart Disease and the representatives of the steering group responsible for the creation of the 11th revision of the International Classification of Diseases, administered by the World Health Organisation. Similar collaborative efforts are underway involving the leadership of The International Nomenclature Committee for Pediatric and Congenital Heart Disease and the International Health Terminology Standards Development Organisation, the owners of the Systematized Nomenclature of Medicine or "SNOMED". The International Paediatric and Congenital Cardiac Code was created by specialists in the field to name and classify paediatric and congenital cardiac disease and its treatment. It is a comprehensive code that can be freely downloaded from the internet (http://www.IPCCC.net) and is already in use worldwide, particularly for international comparisons of outcomes.
The goal of this effort is to create strategies for stratification of risk and to improve healthcare for the individual patient. The collaboration with the World Health Organisation, the International Health Terminology Standards Development Organisation, and the healthcare industry will lead to further enhancement of the International Code, and to its more universal use.