984 results for Code-centric development
Abstract:
The process of developing software that takes advantage of multiple processors is commonly referred to as parallel programming. For various reasons, this process is much harder than the sequential case. For decades, parallel programming was a problem for only a small niche: engineers working on parallelizing mostly numerical applications in High Performance Computing. This has changed with the advent of multi-core processors in mainstream computer architectures. Parallel programming is nowadays becoming a problem for a much larger group of developers. The main objective of this thesis was to find ways to make parallel programming easier for them. Different aims were identified in order to reach this objective: research the state of the art of parallel programming today, improve the education of software developers on the topic, and provide programmers with powerful abstractions to make their work easier. To reach these aims, several key steps were taken. To start with, a survey was conducted among parallel programmers to find out about the state of the art. More than 250 people participated, yielding results about the parallel programming systems and languages in use, as well as about common problems with these systems. Furthermore, a study was conducted in university classes on parallel programming. It resulted in a list of frequently made mistakes that were analyzed and used to create a programmers' checklist to avoid them in the future. For programmers' education, an online resource was set up to collect experiences and knowledge in the field of parallel programming, called the Parawiki. Another key step in this direction was the creation of the Thinking Parallel weblog, where more than 50,000 readers to date have read essays on the topic. For the third aim (powerful abstractions), it was decided to concentrate on one parallel programming system: OpenMP. Its ease of use and high level of abstraction were the most important reasons for this decision. Two different research directions were pursued. The first one resulted in a parallel library called AthenaMP. It contains so-called generic components, derived from design patterns for parallel programming. These include functionality to enhance the locks provided by OpenMP, to perform operations on large amounts of data (data-parallel programming), and to enable the implementation of irregular algorithms using task pools. AthenaMP itself serves a triple role: the components are well documented and can be used directly in programs, developers can study the source code and learn from it, and compiler writers can use it as a testing ground for their OpenMP compilers. The second research direction was targeted at changing the OpenMP specification to make the system more powerful. The main contributions here were a proposal to enable thread cancellation and a proposal to avoid busy waiting. Both were implemented in a research compiler, shown to be useful in example applications, and proposed to the OpenMP Language Committee.
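For orientation only, and not taken from AthenaMP itself, the following C++ sketch illustrates the kind of generic component the abstract refers to: a hypothetical RAII guard around an OpenMP lock, used here inside a simple data-parallel reduction. All class and variable names are illustrative assumptions.

    #include <omp.h>
    #include <vector>
    #include <cstdio>

    // Hypothetical guard that "enhances" the plain OpenMP lock:
    // the lock is released automatically when the guard leaves scope.
    class ScopedOmpLock {
    public:
        explicit ScopedOmpLock(omp_lock_t& l) : lock_(l) { omp_set_lock(&lock_); }
        ~ScopedOmpLock() { omp_unset_lock(&lock_); }
    private:
        omp_lock_t& lock_;
    };

    int main() {
        std::vector<double> data(1000000, 1.0);
        double sum = 0.0;
        omp_lock_t lock;
        omp_init_lock(&lock);

        // Data-parallel operation over a large array: each thread computes a
        // private partial sum and merges it into the global sum under the lock.
        #pragma omp parallel
        {
            double partial = 0.0;
            #pragma omp for
            for (long i = 0; i < (long)data.size(); ++i)
                partial += data[i] * data[i];
            {
                ScopedOmpLock guard(lock);   // critical section via the guard
                sum += partial;
            }
        }

        omp_destroy_lock(&lock);
        std::printf("sum = %f\n", sum);
        return 0;
    }

Built with any OpenMP-aware compiler (e.g. g++ -fopenmp), the guard releases the lock even on early exits from the block, which is one way a library component can enhance the raw omp_lock_t interface.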
Abstract:
Emergency vehicles use high-amplitude sirens to warn pedestrians and other road users of their presence. Unfortunately, the siren noise enters the vehicle and corrupts the intelligibility of two-way radio voice communications from the emergency vehicle to a control room. Often the siren has to be turned off to enable the control room to hear what is being said, which subsequently endangers people's lives. A digital signal processing (DSP) based system for the cancellation of siren noise embedded within speech is presented. The system has been tested with the least mean square (LMS), normalised least mean square (NLMS) and affine projection algorithm (APA) using recordings of three common types of sirens (two-tone, wail and yelp) from actual test vehicles. It was found that the APA with a projection order of 2 gives improved cancellation over the LMS and NLMS with only a moderate increase in algorithm complexity and code size. Therefore, this siren noise cancellation system using the APA offers an improvement over the cancellation achieved by previous systems. The removal of the siren noise improves the response time for the emergency vehicle, and thus the system can contribute to saving lives. The system also allows voice communication to take place even when the siren is on, so the vehicle poses less danger when moving at high speeds in heavy traffic.
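As a minimal sketch of the adaptive-filter core behind such a canceller (not the paper's implementation; filter length, step size and the APA extension are assumptions), one LMS update step in C++ could look like this:

    #include <vector>
    #include <cstddef>

    // One LMS step of an adaptive noise canceller.
    // x  - recent reference (siren) samples, x[0] newest, length = number of taps
    // d  - primary input sample (speech corrupted by siren noise)
    // w  - filter weights, updated in place
    // mu - step size
    // Returns e = d - y, the error signal that approximates the clean speech.
    double lms_step(const std::vector<double>& x, double d,
                    std::vector<double>& w, double mu) {
        double y = 0.0;
        for (std::size_t k = 0; k < w.size(); ++k)
            y += w[k] * x[k];              // filter output: estimate of the noise in d
        const double e = d - y;            // error = cleaned signal
        for (std::size_t k = 0; k < w.size(); ++k)
            w[k] += mu * e * x[k];         // LMS weight update
        return e;
    }

The NLMS variant normalises the step size by the reference signal power, and the APA generalises the update to a small block of past reference vectors (projection order 2 in the paper), which is what yields the improved convergence at moderate extra cost.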
Abstract:
The Perspex Machine arose from the unification of computation with geometry. We now report significant redevelopment of both a partial C compiler that generates perspex programs and of a Graphical User Interface (GUI). The compiler is constructed with standard compiler-generator tools and produces both an explicit parse tree for C and an Abstract Syntax Tree (AST) that is better suited to code generation. The GUI uses a hash table and a simpler software architecture to achieve an order of magnitude speed up in processing and, consequently, an order of magnitude increase in the number of perspexes that can be manipulated in real time (now 6,000). Two perspex-machine simulators are provided, one using trans-floating-point arithmetic and the other using transrational arithmetic. All of the software described here is available on the world wide web. The compiler generates code in the neural model of the perspex. At each branch point it uses a jumper to return control to the main fibre. This has the effect of pruning out an exponentially increasing number of branching fibres, thereby greatly increasing the efficiency of perspex programs as measured by the number of neurons required to implement an algorithm. The jumpers are placed at unit distance from the main fibre and form a geometrical structure analogous to a myelin sheath in a biological neuron. Both the perspex jumper-sheath and the biological myelin-sheath share the computational function of preventing cross-over of signals to neurons that lie close to an axon. This is an example of convergence driven by similar geometrical and computational constraints in perspex and biological neurons.
Abstract:
This handbook article gives an historical overview of the development of research into code-switching and discusses its relationship to other language contact phenomena.
Abstract:
In October 2008 the UK government announced a very ambitious commitment to reduce greenhouse gas emissions by at least 34% by 2020 and by 80% by 2050 against a 1990 baseline. Consequently, the government declared that new homes should be built to high environmental standards, which means that from 2016 new homes will have to be built to a Zero Carbon standard. The paper sets out to present a UK zero carbon residential development achieving the highest level, Level 6, of the Code for Sustainable Homes standard. Comprehensive information is provided about various environmental aspects of the housing development. Special attention is given to the energy efficiency features of the houses and to the low carbon district heating solution, which includes a biomass boiler, heat pumps, solar collectors and photovoltaic panels. The paper also presents the challenges which designers and builders had to face in delivering these houses of the future.
Abstract:
Performance modelling is a useful tool in the lifecycle of high performance scientific software, such as weather and climate models, especially as a means of ensuring efficient use of available computing resources. In particular, sufficiently accurate performance prediction could reduce the effort and experimental computer time required when porting and optimising a climate model to a new machine. In this paper, traditional techniques are used to predict the computation time of a simple shallow water model which is illustrative of the computation (and communication) involved in climate models. These models are compared with real execution data gathered on AMD Opteron-based systems, including several phases of the U.K. academic community HPC resource, HECToR. Some success was achieved in relating source code to achieved performance for the K10 series of Opterons, but the method was found to be inadequate for the next-generation Interlagos processor. This experience led to the investigation of a data-driven application benchmarking approach to performance modelling. Results for an early version of the approach are presented using the shallow water model as an example.
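For reference, the "traditional" analytical models alluded to above commonly take a form along the following lines (the paper's actual model and fitted constants are not reproduced; the symbols below are generic assumptions), splitting the per-timestep cost into computation and communication terms:

    T_{\mathrm{step}} \;\approx\; \frac{W}{P\,R} \;+\; m\,\alpha \;+\; \frac{V}{\beta}

where W is the floating-point work per step, P the number of cores, R the sustained per-core flop rate, m the number of messages, \alpha the per-message latency and \beta the effective network bandwidth.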
Abstract:
This paper presents the mathematical development of a body-centric nonlinear dynamic model of a quadrotor UAV that is suitable for the development of biologically inspired navigation strategies. Analytical approximations are used to find an initial guess of the parameters of the nonlinear model, then parameter estimation methods are used to refine the model parameters using the data obtained from onboard sensors during flight. Due to the unstable nature of the quadrotor model, the identification process is performed with the system in closed-loop control of attitude angles. The obtained model parameters are validated using real unseen experimental data. Based on the identified model, a Linear-Quadratic (LQ) optimal tracker is designed to stabilize the quadrotor and facilitate its translational control by tracking body accelerations. The LQ tracker is tested on an experimental quadrotor UAV and the obtained results are a further means to validate the quality of the estimated model. The unique formulation of the control problem in the body frame makes the controller better suited for bio-inspired navigation and guidance strategies than conventional attitude or position based control systems that can be found in the existing literature.
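For context, the standard continuous-time LQ optimal regulation problem underlying such a tracker (the paper's particular body-frame state vector and weighting matrices are not given here) can be written as:

    J = \int_0^{\infty} \left( x^{\top} Q\, x + u^{\top} R\, u \right) dt ,
    \qquad u = -K x , \qquad K = R^{-1} B^{\top} P ,
    \qquad A^{\top} P + P A - P B R^{-1} B^{\top} P + Q = 0 ,

where P is the solution of the algebraic Riccati equation; the tracker form adds a reference signal (here, commanded body accelerations) to this regulation law.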
Abstract:
Background: 29 autoimmune diseases, including Rheumatoid Arthritis, gout, Crohn’s Disease, and Systemic Lupus Erythematosus, affect 7.6-9.4% of the population. While effective therapy is available, many patients do not follow treatment or use medications as directed. Digital health and Web 2.0 interventions have demonstrated much promise in increasing medication and treatment adherence, but to date many Internet tools have proven disappointing. In fact, most digital interventions continue to suffer from high attrition in patient populations, are burdensome for healthcare professionals, and have relatively short life spans. Objective: Digital health tools have traditionally centered on the transformation of existing interventions (such as diaries, trackers, stage-based or cognitive behavioral therapy programs, coupons, or symptom checklists) to electronic format. Advanced digital interventions have also incorporated attributes of Web 2.0 such as social networking, text messaging, and the use of video. Despite these efforts, there has been little measurable impact on non-adherence for illnesses that require medical interventions, and research must look to other strategies or development methodologies. As a first step in investigating the feasibility of developing such a tool, the objective of the current study is to systematically rate factors of non-adherence that have been reported in past research studies. Methods: Grounded Theory, recognized as a rigorous method that facilitates the emergence of new themes through systematic analysis, data collection and coding, was used to analyze quantitative, qualitative and mixed method studies addressing the following autoimmune diseases: Rheumatoid Arthritis, gout, Crohn’s Disease, Systemic Lupus Erythematosus, and inflammatory bowel disease. Studies were only included if they contained primary data addressing the relationship with non-adherence. Results: Out of the 27 studies, four non-modifiable and 11 modifiable risk factors were discovered. Over one third of the articles identified the following risk factors as common contributors to medication non-adherence (percent of studies reporting): patients not understanding treatment (44%), side effects (41%), age (37%), dose regimen (33%), and perceived medication ineffectiveness (33%). An unanticipated finding that emerged was the need for risk stratification tools (81%) with patient-centric approaches (67%). Conclusions: This study systematically identifies and categorizes medication non-adherence risk factors in select autoimmune diseases. Findings indicate that patients' understanding of their disease and of the role of medication is paramount. An unexpected finding was that the majority of research articles called for the creation of tailored, patient-centric interventions that dispel personal misconceptions about disease, pharmacotherapy, and how the body responds to treatment. To our knowledge, these interventions do not yet exist in digital format. Rather than adopting a systems-level approach, digital health programs should focus on cohorts with heterogeneous needs, and develop tailored interventions based on individual non-adherence patterns.
Abstract:
HydroShare is an online, collaborative system being developed for open sharing of hydrologic data and models. The goal of HydroShare is to enable scientists to easily discover and access hydrologic data and models, retrieve them to their desktop or perform analyses in a distributed computing environment that may include grid, cloud or high performance computing model instances as necessary. Scientists may also publish outcomes (data, results or models) into HydroShare, using the system as a collaboration platform for sharing data, models and analyses. HydroShare is expanding the data sharing capability of the CUAHSI Hydrologic Information System by broadening the classes of data accommodated, creating new capability to share models and model components, and taking advantage of emerging social media functionality to enhance information about and collaboration around hydrologic data and models. One of the fundamental concepts in HydroShare is that of a Resource. All content is represented using a Resource Data Model that separates system and science metadata and has elements common to all resources as well as elements specific to the types of resources HydroShare will support. These will include different data types used in the hydrology community and models and workflows that require metadata on execution functionality. The HydroShare web interface and social media functions are being developed using the Drupal content management system. A geospatial visualization and analysis component enables searching, visualizing, and analyzing geographic datasets. The integrated Rule-Oriented Data System (iRODS) is being used to manage federated data content and perform rule-based background actions on data and model resources, including parsing to generate metadata catalog information and the execution of models and workflows. This presentation will introduce the HydroShare functionality developed to date, describe key elements of the Resource Data Model and outline the roadmap for future development.
Abstract:
The object of this work is to understand the financing of companies in crisis, more specifically the financing granted after the filing for judicial reorganization, as a way of allowing the company to emerge from the crisis and return to a condition of normality. To that end, borrowing the term coined by North American doctrine to refer to the injection of funds into distressed companies, we will use the term DIP financing. For an adequate understanding of the subject, we must first understand the origin of DIP financing in the United States and how the matter is currently regulated there. The second step will be to evaluate the possibility of applying the same funding structure in Brazil. In studying the origin of this mechanism in the United States, we will see the problems that arose over the years and how they were overcome by case law and by legal scholarship, so that DIP financing consolidated itself as one of the forms of injecting capital into companies in crisis, culminating in the development of a true industry of credit to distressed companies. An analysis of the problems faced by the American bankruptcy system leads to the hypothesis that a DIP financing market can only develop in Brazil if mechanisms are established that assure whoever grants financing after the filing for judicial reorganization a super-priority of repayment after the reorganization.
Abstract:
The increase in the incidence of infectious diseases worldwide, particularly in developing countries, is worrying. Each year, 14 million people are killed by infectious diseases, mainly HIV/AIDS, respiratory infections, malaria and tuberculosis. Despite the great burden on poor countries, drug discovery to treat tropical diseases has come to a standstill. There is no interest from the pharmaceutical industry in drug development against the major diseases of poor countries, since the financial return cannot be guaranteed. This has created an urgent need for new therapeutics for neglected diseases. A possible approach has been the exploitation of the inhibition of unique targets vital to the pathogen, such as the shikimate pathway enzymes, which are present in bacteria, fungi and apicomplexan parasites but are absent in mammals. Chorismate synthase (CS) catalyses the seventh step in this pathway, the conversion of 5-enolpyruvylshikimate-3-phosphate to chorismate. The strict requirement for a reduced flavin mononucleotide and the anti-1,4-elimination are both unusual aspects which make the CS reaction unique among flavin-dependent enzymes, making it an important target for the development of chemotherapeutic agents. In this review we present the main biochemical features of CS from bacterial and fungal sources and their differences from the apicomplexan CS. The proposed CS mechanisms are discussed and compared with structural data. The CS structures of some organisms are compared and their distinct features analyzed. Some known CS inhibitors are presented and their main characteristics are discussed. The structural and kinetic data reviewed here can be useful for the design of inhibitors. © 2007 Bentham Science Publishers Ltd.
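In compact form, the step catalysed by CS (with the reduced flavin cofactor indicated over the arrow) is:

    \text{EPSP} \;\xrightarrow{\ \text{CS},\ \text{FMNH}_2\ }\; \text{chorismate} \;+\; \text{P}_i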
Abstract:
EPSP synthase (EPSPS) is an essential enzyme in the shikimate pathway, transferring the enolpyruvyl group of phosphoenolpyruvate to shikimate-3-phosphate to form 5-enolpyruvylshikimate-3-phosphate (EPSP) and inorganic phosphate. This enzyme is composed of two domains, which are formed by three copies of βαβαββ-folding units; in between there are two crossover chain segments hinging the nearly topologically symmetrical domains together and allowing the conformational changes necessary for substrate conversion. The reaction is ordered, with shikimate-3-phosphate binding first, followed by phosphoenolpyruvate, and then by the subsequent release of phosphate and EPSP. N-(phosphonomethyl)glycine (glyphosate) is the commercial inhibitor of this enzyme. Apparently, the binding of shikimate-3-phosphate is necessary for glyphosate binding, since it induces the closure of the two domains to form the active site in the interdomain cleft. However, it is somewhat controversial whether binding of shikimate-3-phosphate alone is enough to induce the complete conversion to the closed state. The phosphoenolpyruvate binding site seems to be located mainly on the C-terminal domain, while the binding site of shikimate-3-phosphate is located primarily in the N-terminal domain residues. However, recent results demonstrate that the active site of the enzyme undergoes structural changes upon inhibitor binding on a scale that cannot be predicted by conventional computational methods. Studies of molecular docking based on the interaction of known EPSPS structures with the (R)-phosphonate TI analogue reveal that more experimental data on the structure and dynamics of various EPSPS-ligand complexes are needed to more effectively apply structure-based drug design to this enzyme in the future. © 2007 Bentham Science Publishers Ltd.
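The overall reaction described above can be summarised as:

    \text{shikimate-3-phosphate} \;+\; \text{PEP} \;\xrightarrow{\ \text{EPSPS}\ }\; \text{EPSP} \;+\; \text{P}_i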
Abstract:
Modeling ERP software means capturing the information necessary for supporting enterprise management. This modeling process goes down through different abstraction layers, from enterprise modeling to code generation. Thus ERP is the kind of system where enterprise engineering undoubtedly has, or should have, a strong influence. For the case of Free/Open Source ERP, the lack of proper modeling methods and tools can jeopardize the advantage brought by source code availability. Therefore, the aim of this paper is to present a development process proposal for the Open Source ERP5 system. The proposed development process aims to cover different abstraction levels, taking into account well established standards and common practices, as well as platform issues. Its main goal is to provide an adaptable meta-process to ERP5 adopters. © 2006 IEEE.
Abstract:
Proton beams in medical applications deal with relatively thick targets like the human head or trunk. Thus, the fidelity of proton computed tomography (pCT) simulations as a tool for proton therapy planning depends, in the general case, on the accuracy of results obtained for the proton interaction with thick absorbers. GEANT4 simulations of proton energy spectra after passing through thick absorbers do not agree well with existing experimental data, as shown previously. Moreover, the spectra simulated for the Bethe-Bloch domain showed an unexpected sensitivity to the choice of low-energy electromagnetic models during the code execution. These observations were made with GEANT4 version 8.2 during our simulations for pCT. This work describes in more detail the simulations of the proton passage through aluminum absorbers of varied thickness. The simulations were done by modifying only the geometry in the Hadrontherapy example, and for all available choices of the electromagnetic physics models. As the most probable reason for these effects is some specific feature in the code, or some specific implicit parameter described in the GEANT4 manual, we continued our study with version 9.2 of the code. Some improvements in comparison with our previous results were obtained. The simulations were performed with further applications to pCT development in mind. © 2011 American Institute of Physics.
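As a rough illustration of the kind of geometry modification involved (this is not the authors' actual change to the Hadrontherapy example; the class name, dimensions and material setup are assumptions), a minimal GEANT4 detector construction with an aluminum absorber of adjustable thickness might look like this:

    #include "G4Box.hh"
    #include "G4LogicalVolume.hh"
    #include "G4PVPlacement.hh"
    #include "G4NistManager.hh"
    #include "G4SystemOfUnits.hh"
    #include "G4ThreeVector.hh"
    #include "G4VUserDetectorConstruction.hh"

    // Illustrative detector construction: a world volume containing a single
    // aluminum slab whose thickness can be varied between simulation runs.
    class AbsorberConstruction : public G4VUserDetectorConstruction {
    public:
        explicit AbsorberConstruction(G4double thickness) : fThickness(thickness) {}

        G4VPhysicalVolume* Construct() override {
            auto* nist = G4NistManager::Instance();
            auto* air  = nist->FindOrBuildMaterial("G4_AIR");
            auto* alu  = nist->FindOrBuildMaterial("G4_Al");

            // World volume
            auto* worldSolid = new G4Box("World", 50*cm, 50*cm, 50*cm);
            auto* worldLog   = new G4LogicalVolume(worldSolid, air, "World");
            auto* worldPhys  = new G4PVPlacement(nullptr, G4ThreeVector(), worldLog,
                                                 "World", nullptr, false, 0);

            // Aluminum absorber of variable thickness along the beam (z) axis
            auto* absSolid = new G4Box("Absorber", 10*cm, 10*cm, fThickness / 2.0);
            auto* absLog   = new G4LogicalVolume(absSolid, alu, "Absorber");
            new G4PVPlacement(nullptr, G4ThreeVector(0, 0, 0), absLog, "Absorber",
                              worldLog, false, 0);

            return worldPhys;
        }

    private:
        G4double fThickness;
    };

In a study of the kind described, only the absorber thickness and the registered electromagnetic physics list would be varied between runs, keeping the rest of the application unchanged.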