980 results for execution
Abstract:
Dissertation submitted to obtain the Master's Degree in Computer Engineering
Abstract:
Concurrent programming is a difficult and error-prone task because the programmer must reason about multiple threads of execution and their possible interleavings. A concurrent program must synchronize the concurrent accesses to shared memory regions, but this alone is not enough to prevent all the anomalies that can arise in a concurrent setting. The programmer can misidentify the scope of the regions of code that need to be atomic, resulting in atomicity violations and failing to ensure the correct behavior of the program. Executing a sequence of atomic operations may lead to incorrect results when these operations are correlated; in this case, the programmer may be required to enforce the execution of that sequence of operations as a single atomic unit to avoid atomicity violations. This situation is especially common when the developer makes use of services from third-party packages or modules. This thesis proposes a methodology, based on design by contract, to specify which sequences of operations must be executed atomically. We developed an analysis that statically verifies that a client of a module respects its contract, allowing the programmer to identify the source of possible atomicity violations.
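For illustration only (a hypothetical Java example, not taken from the thesis): each call into the module below is atomic by itself, yet the correlated containsKey/get sequence can be interleaved by another thread, which is exactly the kind of atomicity violation such contracts are meant to expose.

    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical module client. Every ConcurrentHashMap operation is atomic on
    // its own, but the containsKey/get pair is a correlated sequence that must run
    // atomically as a whole; otherwise a concurrent remove() between the two calls
    // makes get() return null and the unboxing below throws.
    public class AccountRegistry {
        private final ConcurrentHashMap<String, Integer> balances = new ConcurrentHashMap<>();

        // Buggy: the check and the read may be separated by a concurrent remove().
        public int balanceOfUnsafe(String account) {
            if (balances.containsKey(account)) {   // atomic call 1
                return balances.get(account);      // atomic call 2 -- may now be null
            }
            return 0;
        }

        // One repair: perform the correlated calls as a single atomic operation.
        public int balanceOf(String account) {
            return balances.getOrDefault(account, 0);
        }

        public void remove(String account) {
            balances.remove(account);
        }
    }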
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Management from the NOVA – School of Business and Economics
Abstract:
Dissertation submitted to obtain the Master's Degree in Computer Engineering
Abstract:
Ozone is the main component of photochemical air pollution. As an irritant of the respiratory tract, its health effects are essentially characterized by cough, dyspnoea, chest discomfort and changes in pulmonary function; environmental exposure to O3 is also associated with a higher frequency and severity of asthma attacks and with clinical manifestations of conjunctival irritation. It was mainly from the 1950s onwards, with the discovery of high ozone concentrations in workplaces involving arc welding, that this gas came to be regarded as an occupational risk factor. In the early 1960s the first studies of O3 exposure in aircraft cabins appeared, prompted by clinical complaints of respiratory tract irritation among crew members and passengers. Until then, this symptomatology had been attributed to other factors, namely the ventilation system and the low humidity of the air. Later, some studies revealed that, in subsonic commercial flights, the high O3 levels observed inside the cabins could be caused by its insufficient destruction in the air-intake systems. The present study, carried out on long-haul flights operated with Airbus A340-300 aircraft on a single commercial route, aimed to assess ozone exposure in cabin indoor air. The mean ozone concentrations observed were below the values likely to cause adverse effects on the respiratory tract. The maximum instantaneous value reached was a concentration of 152 ppb. In addition, an influence of the seasons of the year on O3 levels was found. Taken together, the results show that the ozone concentrations in the indoor air of the aircraft cabins studied are below the corresponding maximum admissible concentrations, and compliance with the FAA standard on protection against ozone exposure in commercial aircraft cabins was observed on all flights.
Abstract:
Dissertation submitted to obtain the Master's Degree in Computer Engineering
Abstract:
Dissertation submitted to obtain the Master's Degree in Biomedical Engineering
Abstract:
Chlamydia trachomatis has a unique obligate intracellular developmental cycle that ends with the lysis of the cell and/or the extrusion of the bacteria, allowing for re-infections. Because Chlamydia trachomatis infections are often asymptomatic, diagnosis is usually late, occurring only after persistence has become manifest. Investigating the consequences of long-term infections and the molecular mechanisms behind them will shed light on the extent to which the bacteria can modulate host cell function and on the ultimate fate of host cells after clearance of an infection. Such studies of host cell fate would be greatly facilitated if infected cells were permanently marked during and after the infection. Therefore, this project intends to develop a new genetic tool that allows permanent labeling of Chlamydia trachomatis host cells. The plan was to generate a Chlamydia trachomatis strain encoding a recombinant CRE recombinase fused to a secretory effector of the Chlamydia type III secretion system (T3SS). Upon translocation into the host cell, this recombinant CRE enzyme could then, owing to its site-specific recombination function, switch a reporter gene contained in the host cell genome. To this end, the reporter line carried a membrane-tagged tdTomato (mT) gene flanked by two LoxP sequences, followed by a GFP gene. Translocation of the recombinant CRE recombinase into this cell line was designed to trigger recombination of the LoxP sites, whereby the cells would switch from red fluorescence to green as an irreversible label of the infected cells. Successful execution of this mechanism would make it possible to draw a direct link between Chlamydia trachomatis infection and the subsequent fate of the infected cell.
Abstract:
The Graphics Processing Unit (GPU) is present in almost every modern personal computer. Despite its special-purpose design, it has been increasingly used for general computations, with very good results. Hence, there is a growing effort from the community to seamlessly integrate this kind of device into everyday computing. However, to fully exploit the potential of a system comprising GPUs and CPUs, these devices should be presented to the programmer as a single platform. The efficient combination of the power of CPU and GPU devices is highly dependent on each device's characteristics, resulting in platform-specific applications that cannot be ported to different systems. Moreover, the most efficient work balance among devices is highly dependent on the computations to be performed and the respective data sizes. In this work, we propose a solution for heterogeneous environments based on the abstraction level provided by algorithmic skeletons. Our goal is to take full advantage of the power of all CPU and GPU devices present in a system, without the need for different kernel implementations or explicit work distribution. To that end, we extended Marrow, an algorithmic skeleton framework for multi-GPUs, to support CPU computations and to efficiently balance the workload between devices. Our approach is based on an offline training execution that identifies the ideal work balance and platform configurations for a given application and input data size. The evaluation of this work shows that the combination of CPU and GPU devices can significantly boost the performance of our benchmarks in the tested environments when compared to GPU-only executions.
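As a minimal sketch of the offline-training idea (hypothetical names and a plain Java stand-in; Marrow's real API and the OpenCL kernels are not shown): the trainer runs the same workload with several candidate CPU/GPU split ratios, times each configuration, and records the best ratio for a given input size so it can be reused at run time.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.IntConsumer;

    // Hypothetical offline trainer: searches for the CPU share of the work that
    // minimises the measured execution time, with one timed run per candidate ratio.
    public class SplitTrainer {
        private final Map<Integer, Double> bestRatioBySize = new HashMap<>();

        // cpuPart and gpuPart stand in for the real device executors; each receives
        // the number of elements assigned to that device.
        public double train(int inputSize, IntConsumer cpuPart, IntConsumer gpuPart)
                throws InterruptedException {
            double bestRatio = 0.0;
            long bestTime = Long.MAX_VALUE;
            for (int step = 0; step <= 10; step++) {          // candidate CPU shares 0%..100%
                double ratio = step / 10.0;
                int cpuElems = (int) Math.round(ratio * inputSize);
                long start = System.nanoTime();
                Thread cpu = new Thread(() -> cpuPart.accept(cpuElems));
                Thread gpu = new Thread(() -> gpuPart.accept(inputSize - cpuElems));
                cpu.start(); gpu.start();
                cpu.join(); gpu.join();
                long elapsed = System.nanoTime() - start;
                if (elapsed < bestTime) { bestTime = elapsed; bestRatio = ratio; }
            }
            bestRatioBySize.put(inputSize, bestRatio);        // cached for later executions
            return bestRatio;
        }
    }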
Abstract:
This work is divided into two distinct parts. The first part consists of the study of the metal-organic framework UiO-66Zr, where the aim was to determine the force field that best describes the adsorption equilibrium properties of two different gases, methane and carbon dioxide. The other part of the work focuses on the study of the single-wall carbon nanotube topology for ethane adsorption; here the aim was to simplify the solid-fluid force field model as much as possible in order to increase the computational efficiency of the Monte Carlo simulations. The choice of both adsorbents is based on their potential use in adsorption processes, such as the capture and storage of carbon dioxide, natural gas storage, separation of the components of biogas, and olefin/paraffin separations. The adsorption studies on the two porous materials were performed by molecular simulation using the grand canonical Monte Carlo (μ,V,T) method, over the temperature range of 298-343 K and the pressure range of 0.06-70 bar. The calibration curves of pressure and density as a function of chemical potential and temperature for the three adsorbates under study were obtained by Monte Carlo simulation in the canonical ensemble (N,V,T); polynomial fitting and interpolation of the obtained data made it possible to determine the pressure and gas density at any chemical potential. The adsorption equilibria of methane and carbon dioxide in UiO-66Zr were simulated and compared with the experimental data obtained by Jasmina H. Cavka et al. The results show that the best force field for both gases is a chargeless united-atom force field based on the TraPPE model. Using this validated force field, it was possible to estimate the isosteric heats of adsorption and the Henry constants. In the grand canonical Monte Carlo simulations of carbon nanotubes, we conclude that the fastest type of run is obtained with a force field that approximates the nanotube as a smooth cylinder; this approximation gives execution times that are 1.6 times faster than typical atomistic runs.
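For context, the textbook Metropolis acceptance rules for trial insertion and deletion in the grand canonical (μ,V,T) ensemble are sketched below in LaTeX; the abstract does not quote the exact expressions used in the thesis, so β = 1/k_BT, Λ (the thermal de Broglie wavelength) and ΔU (the energy change of the trial move) are the usual textbook symbols rather than a quotation of the work.

    % Standard GCMC acceptance probabilities (textbook form)
    P_{\mathrm{acc}}(N \to N+1) = \min\!\left[1,\ \frac{V}{\Lambda^{3}(N+1)}\,
        e^{\beta\mu}\, e^{-\beta\,\Delta U}\right]
    P_{\mathrm{acc}}(N \to N-1) = \min\!\left[1,\ \frac{\Lambda^{3}\,N}{V}\,
        e^{-\beta\mu}\, e^{-\beta\,\Delta U}\right]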
Abstract:
The Intel® Xeon Phi™ is the first processor based on Intel's MIC (Many Integrated Cores) architecture. It is a co-processor specially tailored for data-parallel computations, whose basic architectural design is similar to that of GPUs (Graphics Processing Units), leveraging the use of many integrated cores of low computational power to perform parallel computations. The main novelty of the MIC architecture, relative to GPUs, is its compatibility with the Intel x86 architecture. This enables the use of many of the tools commonly available for the parallel programming of x86-based architectures, which may lead to a smaller learning curve. However, programming the Xeon Phi still entails aspects intrinsic to accelerator-based computing in general, and to the MIC architecture in particular. In this thesis we advocate the use of algorithmic skeletons for programming the Xeon Phi. Algorithmic skeletons abstract the complexity inherent to parallel programming, hiding details such as resource management, parallel decomposition, and inter-execution-flow communication, thus removing these concerns from the programmer's mind. In this context, the goal of the thesis is to lay the foundations for the development of a simple but powerful and efficient skeleton framework for programming the Xeon Phi processor. For this purpose we build upon Marrow, an existing framework for the orchestration of OpenCL™ computations in multi-GPU and CPU environments. We extend Marrow to execute both OpenCL and C++ parallel computations on the Xeon Phi. To evaluate the newly developed framework, several well-known benchmarks, such as Saxpy and N-Body, are used not only to compare its performance with that of the existing framework when executing on the co-processor, but also to assess the performance of the Xeon Phi versus a multi-GPU environment.
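To make the skeleton idea concrete (hypothetical names; this is not Marrow's actual API): a map skeleton takes only the per-element computation from the caller and keeps the parallel decomposition and resource management inside the skeleton.

    import java.util.Arrays;
    import java.util.function.IntUnaryOperator;

    // Hypothetical "map" skeleton: the caller supplies the per-element function;
    // decomposition and thread management stay hidden inside the skeleton.
    public final class MapSkeleton {
        public static int[] map(int[] input, IntUnaryOperator f) {
            return Arrays.stream(input).parallel().map(f).toArray();
        }

        public static void main(String[] args) {
            int[] squared = map(new int[] {1, 2, 3, 4}, x -> x * x);
            System.out.println(Arrays.toString(squared));   // [1, 4, 9, 16]
        }
    }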
Abstract:
To cope with modernity, interest in having a fully automated house has been increasing over the years, as technology evolves and as our lives become more stressful and overloaded. An automation system provides a way to simplify some daily tasks, allowing us to have more spare time for the activities where we are really needed. There are some systems in this domain that try to implement these characteristics, but this kind of technology is still at an early stage of evolution and remains far from empowering the user with the desired control over a habitation. The reason is that such systems lack some important features, namely adaptability, extensibility and evolution. These systems, developed from a bottom-up approach, are often tailored for programmers and domain experts, and most of the time disregard the end users, who are left with unfinished interfaces or products that they find difficult to control. Moreover, complex behaviors are avoided, since they are extremely difficult to implement, mostly due to the need to handle priorities, conflicts and device calibration. Besides, these solutions are only attainable at very high cost, and they still have the limitation of being difficult for non-technical people to configure once in runtime operation. As a result, it is necessary to create a tool that allows the execution of several automated actions, with an interface that is easy to use but at the same time supports all the main features of this domain. It is also desirable that this tool be independent of the hardware so that it can be reused; thus a Model-Driven Development (MDD) approach is the ideal option, as it is a method that follows those principles. Since the automation domain has some very specific concepts, the use of models should be combined with a Domain-Specific Language (DSL). With these two methods, it is possible to create a solution that is adapted to the end users, but also to domain experts and programmers, thanks to the several levels of abstraction that can be added to diminish the complexity of use. The aim of this thesis is to design a DSL, following the MDD approach, with the purpose of supporting Home Automation (HA) concepts. In this implementation, the development of simple and complex scenarios should be supported and will be one of the most important concerns. This DSL should also support other significant features of this domain, such as the ability to schedule tasks, which is something that is limited in the currently existing solutions.
Abstract:
Most of today's systems, especially those related to the Web or to multi-agent systems, are not standalone or independent, but are part of a greater ecosystem, where they need to interact with other entities, react to complex changes in the environment, and act both on their own knowledge base and on the external environment itself. Moreover, these systems are clearly not static, but are constantly evolving due to the execution of self-updates or external actions. Whenever actions and updates are possible, the need emerges to ensure properties regarding the outcome of performing such actions. Originally proposed in the context of databases, transactions solve this problem by guaranteeing atomicity, consistency, isolation and durability for a special set of actions. However, current transaction solutions fail to guarantee such properties in dynamic environments, since they cannot combine transaction execution with reactive features, or with the execution of actions over domains that the system does not completely control (thus making rolling back a non-viable proposition). In this thesis, we investigate which transaction properties can be ensured over these dynamic environments, and how. To achieve this goal, we provide logic-based solutions, based on Transaction Logic, to precisely model and execute transactions in such environments, where knowledge bases can be defined by arbitrary logic theories.
Abstract:
Learning novel actions and skills is a prevalent ability across multiple species and a critical feature for survival and competence in a constantly changing world. Novel actions are generated and learned through a process of trial and error, in which an animal explores the environment around itself, generates multiple patterns of behavior and selects the ones that increase the likelihood of positive outcomes. Proper adaptation and execution of the selected behavior require the coordination of several biomechanical features by the animal. Cortico-basal ganglia circuits and loops are critically involved in the acquisition, learning and consolidation of motor skills. (...)
Abstract:
OutSystems Platform is used to develop, deploy, and maintain enterprise web and mobile web applications. Applications are developed through a visual domain-specific language, in an integrated development environment, and compiled to a standard stack of web technologies. At the platform's core there is a compiler and a deployment service that transform the visual model into a running web application. As applications grow, compilation and deployment times increase as well, impacting the developer's productivity. In the previous model, a full application was the only compilation and deployment unit: when the developer published an application, even if only a very small aspect of it had changed, the application would be fully compiled and deployed. Our goal is to reduce compilation and deployment times for the most common use case, in which the developer performs small changes to an application before compiling and deploying it. We modified the OutSystems Platform to support a new incremental compilation and deployment model that reuses previous computations as much as possible in order to improve performance. In our approach, the full application is broken down into smaller compilation and deployment units, increasing what can be cached and reused. We also observed that this finer-grained model would benefit from a parallel execution model; hence, we created a task-driven scheduler that executes compilation and deployment tasks in parallel. Our benchmarks show a substantial improvement in compilation and deployment times for the aforementioned development scenario.
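A minimal sketch of a task-driven scheduler of this kind (hypothetical names; not the OutSystems implementation): each compilation or deployment unit is submitted once its dependencies have completed, so independent units run in parallel on a thread pool and shared units are built only once.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Hypothetical incremental scheduler: memoises each unit's future so a unit
    // shared by several modules is compiled only once, and runs a unit only after
    // all of its dependencies have finished. Assumes an acyclic dependency graph.
    public class IncrementalScheduler {
        public interface Task {
            List<Task> dependencies();
            void run();
        }

        private final ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        private final Map<Task, CompletableFuture<Void>> scheduled = new HashMap<>();

        public synchronized CompletableFuture<Void> schedule(Task task) {
            CompletableFuture<Void> existing = scheduled.get(task);
            if (existing != null) {
                return existing;                    // already scheduled: reuse it
            }
            CompletableFuture<?>[] deps = task.dependencies().stream()
                    .map(this::schedule)            // schedule dependencies first
                    .toArray(CompletableFuture[]::new);
            CompletableFuture<Void> future = CompletableFuture.allOf(deps)
                    .thenRunAsync(task::run, pool); // run when all dependencies are done
            scheduled.put(task, future);
            return future;
        }

        public void shutdown() {
            pool.shutdown();
        }
    }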