120 results for Symbolic Execution
Abstract:
The nature of the Portuguese transition to democracy and the ensuing state crises (1974-1975) created a ‘window of opportunity’ in which the ‘reaction to the past’ was much stronger than in other Southern European, or even Central and Eastern European, transitions. In Portugal, initiatives of symbolic rupture with the past began soon after the coup d’état of April 25, 1974, and transitional justice policies took mainly three forms. First, institutional reforms directed primarily at abusive state institutions such as the political police (PIDE-DGS) and the political courts (Plenary courts), in order to dismantle the repressive apparatus and prevent further human rights abuses and impunity. Second, criminal prosecutions of the perpetrators considered most responsible for repression and abuses. Finally, lustration or political purges (saneamentos, the term used in Portugal for political purges), which were in fact the most common form of political justice in the Portuguese transition to democracy. This paper deals with the peculiarities of transitional justice in Portugal, devoting particular attention to the judiciary, a key sector for understanding how the Portuguese dealt with their authoritarian past.
Abstract:
Project work presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the master's degree in Strategic Management of Public Relations.
Abstract:
Internship report presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the master's degree in Strategic Management of Public Relations.
Abstract:
Dissertation for obtaining the degree of Master in Civil Engineering.
Abstract:
Project work for obtaining the degree of Master in Civil Engineering, in the specialization area of Structures.
Abstract:
Internship report for obtaining the degree of Master in Civil Engineering, in the specialization area of Buildings.
Abstract:
Final internship report presented to the Escola Superior de Dança for the degree of Master in Dance Teaching.
Abstract:
Workflows have been successfully applied to express the decomposition of complex scientific applications. This has motivated many initiatives to develop scientific workflow tools. However, existing tools still lack adequate support for important aspects, namely decoupling the enactment engine from the workflow task specification, decentralizing the control of workflow activities, and allowing tasks to run autonomously on distributed infrastructures, for instance on Clouds. Furthermore, many workflow tools only support the execution of Directed Acyclic Graphs (DAGs), without the concept of iteration, in which activities are executed for millions of iterations over long periods of time, and without support for dynamic workflow reconfiguration after a given iteration. We present the AWARD (Autonomic Workflow Activities Reconfigurable and Dynamic) model of computation, based on the Process Networks model, where the workflow activities (AWA) are autonomic processes with independent control that can run in parallel on distributed infrastructures, e.g. on Clouds. Each AWA executes a Task developed as a Java class that implements a generic interface, allowing end-users to code their applications without concern for low-level details. The data-driven coordination of AWA interactions is based on a shared tuple space, which also supports dynamic workflow reconfiguration and monitoring of workflow execution. We describe how AWARD supports dynamic reconfiguration and discuss typical workflow reconfiguration scenarios. For evaluation, we describe experimental results of AWARD workflow executions in several application scenarios, mapped to a small dedicated cluster and to the Amazon Elastic Compute Cloud (EC2).
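As a concrete illustration of the kind of generic Task interface the abstract describes, here is a minimal Java sketch; the interface and class names are our assumptions, not the actual AWARD API.

```java
// Hypothetical sketch of an AWARD-style activity task; the real AWARD
// interface is not shown in the abstract, so all names here are assumptions.
public interface Task {
    // Called by the enclosing AWA once per iteration: consume the inputs
    // taken from the shared tuple space, return the outputs to publish.
    Object[] execute(Object[] inputs) throws Exception;
}

// An end-user task (a separate file in practice): no tuple-space or Cloud
// details leak into the user code.
class WordCountTask implements Task {
    @Override
    public Object[] execute(Object[] inputs) {
        String text = (String) inputs[0];
        int words = text.isBlank() ? 0 : text.trim().split("\\s+").length;
        return new Object[] { words };
    }
}
```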
Abstract:
In global scientific experiments with collaborative scenarios involving multinational teams, there are big challenges related to data access; namely, moving data to other regions or Clouds is precluded by constraints on latency, costs, data privacy, and data ownership. Furthermore, each site processes local data sets using specialized algorithms and produces intermediate results that are useful as inputs to applications running on remote sites. This paper shows how to model such collaborative scenarios as a scientific workflow implemented with AWARD (Autonomic Workflow Activities Reconfigurable and Dynamic), a decentralized framework offering a feasible way to run workflow activities on distributed data centers in different regions without large data movements. The AWARD workflow activities are independently monitored and can be dynamically reconfigured and steered by different users, namely by hot-swapping algorithms to enhance the computation results or by changing the workflow structure to support feedback dependencies, where an activity receives feedback output from a successor activity. A real implementation of one practical scenario and its execution on multiple data centers of the Amazon Cloud is presented, including experimental results with steering by multiple users.
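A minimal sketch of what hot-swapping an activity's algorithm could look like, reusing the hypothetical Task interface from the sketch above; AWARD's actual steering protocol is not described in the abstract, so the loading mechanism shown here is an assumption.

```java
// Illustrative sketch of hot-swapping an activity's algorithm, one of the
// reconfigurations the abstract describes. The message format and the
// reflective loading below are our assumptions, not AWARD's protocol.
public class TaskSwapper {
    // Replace the running Task with a new implementation, loaded by name
    // (e.g. the class name arriving as a steering tuple in the tuple space).
    public static Task hotSwap(String newTaskClassName) throws Exception {
        Class<?> clazz = Class.forName(newTaskClassName);
        return (Task) clazz.getDeclaredConstructor().newInstance();
    }
}
```

A steering user would then trigger, for instance, `Task current = TaskSwapper.hotSwap("experiments.BetterClusteringTask");` inside the AWA's control loop (both names hypothetical).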
Abstract:
Cloud SLAs compensate customers with credits when average availability drops below certain levels. This is too inflexible, because consumers lose non-measurable amounts of performance and are only compensated later, in subsequent charging cycles. We propose to schedule virtual machines (VMs) driven by range-based non-linear reductions of utility, different for each class of user and across different ranges of resource allocation: partial utility. This customer-defined metric allows providers to transfer resources between VMs in meaningful and economically efficient ways. We define a comprehensive cost model incorporating the partial utility given by clients to a certain level of degradation when VMs are allocated in overcommitted environments (Public, Private, and Community Clouds). CloudSim was extended to support our scheduling model. Several simulation scenarios with synthetic and real workloads are presented, using datacenters of different dimensions regarding the number of servers and computational capacity. We show that partial utility-driven scheduling allows more VMs to be allocated. It brings benefits to providers regarding revenue and resource utilization, allowing more revenue per resource allocated and scaling well with the size of the datacenter when compared with a utility-oblivious redistribution of resources. Regarding clients, the execution time of their workloads is also improved, by incorporating an SLA-based redistribution of their VMs' computational power.
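A minimal sketch of the partial-utility idea: a customer-defined, range-based, non-linear utility over the fraction of requested capacity a VM actually receives. The ranges, exponents and class names below are illustrative assumptions, not the paper's cost model.

```java
// Minimal sketch of a range-based, non-linear "partial utility" function.
// All thresholds and exponents here are assumptions for illustration only.
public class PartialUtility {
    private final double exponent; // shapes the non-linear drop per user class

    public PartialUtility(double exponent) { this.exponent = exponent; }

    // allocatedFraction in [0,1]: 1.0 means the VM gets its full request.
    public double utility(double allocatedFraction) {
        if (allocatedFraction >= 0.9) return 1.0;     // near-full range
        if (allocatedFraction < 0.2) return 0.0;      // unacceptable range
        return Math.pow(allocatedFraction, exponent); // degradation range
    }

    public static void main(String[] args) {
        PartialUtility premium = new PartialUtility(3.0);    // degrades fast
        PartialUtility bestEffort = new PartialUtility(1.0); // degrades linearly
        System.out.println(premium.utility(0.5));    // 0.125
        System.out.println(bestEffort.utility(0.5)); // 0.5
    }
}
```

A scheduler in an overcommitted host can then prefer taking capacity from VMs whose utility drops least, which is how partial utility makes resource transfers between VMs economically meaningful.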
Abstract:
Floating-point computing with more than one TFLOP of peak performance is already a reality in recent Field-Programmable Gate Arrays (FPGAs). General-Purpose Graphics Processing Units (GPGPUs) and recent many-core CPUs have also taken advantage of recent technological innovations in integrated circuit (IC) design and have dramatically improved their peak performance. In this paper, we compare the trends of these computing architectures for high-performance computing and survey these platforms in the execution of algorithms belonging to different scientific application domains. Trends in peak performance, power consumption and sustained performance for particular applications show a widening gap between FPGAs and GPUs or many-core CPUs, moving FPGAs away from high-performance computing with intensive floating-point calculations. FPGAs become competitive for custom floating-point or fixed-point representations, for smaller input sizes of certain algorithms, for combinational logic problems, and for parallel map-reduce problems. © 2014 Technical University of Munich (TUM).
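To illustrate why custom fixed-point representations favour FPGAs, here is a Q16.16 fixed-point multiply in Java; the format choice is ours, not the paper's. The point is that the operation reduces to an integer multiply plus a shift, which maps onto far less FPGA area than a full IEEE-754 floating-point unit.

```java
// Illustrative Q16.16 fixed-point arithmetic: 16 integer bits, 16 fractional
// bits packed in an int. The format is our example, not taken from the paper.
public class Q16_16 {
    static final int FRAC_BITS = 16;

    static int fromDouble(double x) { return (int) Math.round(x * (1 << FRAC_BITS)); }
    static double toDouble(int q)   { return q / (double) (1 << FRAC_BITS); }

    // Multiply in 64 bits, then shift back: one multiplier and one shifter.
    static int mul(int a, int b) { return (int) (((long) a * b) >> FRAC_BITS); }

    public static void main(String[] args) {
        int a = fromDouble(3.25), b = fromDouble(-1.5);
        System.out.println(toDouble(mul(a, b))); // -4.875
    }
}
```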
Abstract:
Master's degree in Occupational Health and Safety.
Abstract:
The three-dimensional (3D) exact solutions developed in the early 1970s by Pagano for simply supported multilayered orthotropic composite plates, and later extended in the 1990s to piezoelectric plates by Heyliger, have been extremely useful in the assessment and development of advanced laminated plate theories and related finite element models. In fact, the well-known test cases provided by Pagano and by Heyliger in those earlier works are still used today as benchmark solutions. However, the limited number of test cases whose 3D exact solutions have been published has somewhat restricted the assessment of recent advanced models to the same few test cases. This work aims to provide additional test cases to serve as benchmark exact solutions for the static analysis of multilayered piezoelectric composite plates. The method introduced by Heyliger to derive the 3D exact solutions has been successfully implemented using symbolic computing, and a number of new test cases are presented here in detail. Specifically, two multilayered plates using PVDF piezoelectric material are selected as test cases, under two different loading conditions and considering three plate aspect ratios (thick, moderately thick and thin plates), for a total of 12 distinct test cases. (C) 2013 Elsevier Ltd. All rights reserved.
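For context, Pagano/Heyliger-type exact solutions for a simply supported rectangular plate start from the standard Navier-type assumed fields below, which satisfy the edge conditions identically; the notation (a single harmonic (m, n), in-plane dimensions a × b) is ours, not reproduced from the paper.

```latex
% Standard Navier-type assumed fields for a simply supported plate.
% Substituting them into the 3D governing equations reduces the problem
% to ODEs in z, solved exactly layer by layer (symbols are our notation).
\begin{align}
u(x,y,z)    &= U(z)\,\cos\frac{m\pi x}{a}\,\sin\frac{n\pi y}{b},\\
v(x,y,z)    &= V(z)\,\sin\frac{m\pi x}{a}\,\cos\frac{n\pi y}{b},\\
w(x,y,z)    &= W(z)\,\sin\frac{m\pi x}{a}\,\sin\frac{n\pi y}{b},\\
\phi(x,y,z) &= \Phi(z)\,\sin\frac{m\pi x}{a}\,\sin\frac{n\pi y}{b}.
\end{align}
```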
Abstract:
Functionally graded materials are composite materials tailored to provide continuously varying properties, according to specific mixing distributions of the constituents. Because of this continuous property variation, which enables, among other advantages, smoother stress distribution profiles, these materials are known to provide superior thermal and mechanical performance compared to traditional laminated composites. The growing use of these materials therefore brings interest in, and the need for, obtaining optimum configurations for each specific application. In this work, the particle swarm optimization technique is used to maximize the bending stiffness of a functionally graded sandwich beam. To this end, a set of case studies is analyzed in order to understand in detail how the tuning of the different optimization parameters can influence the whole process. A re-initialization strategy is also considered, which is not a common approach in particle swarm optimization, as far as could be concluded from published research. As will be shown, this strategy can provide good results and also presents some advantages under some conditions. This work was developed and programmed on the symbolic computation platform Maple 14. (C) 2013 Elsevier B.V. All rights reserved.
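A minimal particle swarm optimization sketch with a stagnation-triggered re-initialization strategy of the general kind the abstract mentions; the quadratic objective stands in for the bending-stiffness model, and all parameter values are assumptions (the paper's own implementation is in Maple 14).

```java
import java.util.Random;

// Minimal PSO with re-initialization: when the global best stagnates, the
// swarm is re-scattered while the best-so-far is kept. Objective and all
// parameter values are stand-ins, not the paper's settings.
public class PsoSketch {
    static final Random RNG = new Random(42);
    static final int N = 20, D = 2, ITERS = 300, STALL_LIMIT = 30;
    static final double W = 0.7, C1 = 1.5, C2 = 1.5, LO = -5, HI = 5;

    // Stand-in objective to maximize; optimum at (1, -2).
    static double f(double[] x) {
        return -(x[0] - 1) * (x[0] - 1) - (x[1] + 2) * (x[1] + 2);
    }

    static void scatter(double[][] pos, double[][] vel) {
        for (int i = 0; i < N; i++)
            for (int d = 0; d < D; d++) {
                pos[i][d] = LO + RNG.nextDouble() * (HI - LO);
                vel[i][d] = 0;
            }
    }

    public static void main(String[] args) {
        double[][] pos = new double[N][D], vel = new double[N][D], pb = new double[N][D];
        double[] pbVal = new double[N], gb = new double[D];
        double gbVal = Double.NEGATIVE_INFINITY;
        int stall = 0;
        scatter(pos, vel);
        for (int i = 0; i < N; i++) { pb[i] = pos[i].clone(); pbVal[i] = f(pos[i]); }
        for (int i = 0; i < N; i++)
            if (pbVal[i] > gbVal) { gbVal = pbVal[i]; gb = pb[i].clone(); }

        for (int it = 0; it < ITERS; it++) {
            boolean improved = false;
            for (int i = 0; i < N; i++) {
                for (int d = 0; d < D; d++) {
                    vel[i][d] = W * vel[i][d]
                              + C1 * RNG.nextDouble() * (pb[i][d] - pos[i][d])
                              + C2 * RNG.nextDouble() * (gb[d] - pos[i][d]);
                    pos[i][d] += vel[i][d];
                }
                double v = f(pos[i]);
                if (v > pbVal[i]) { pbVal[i] = v; pb[i] = pos[i].clone(); }
                if (v > gbVal) { gbVal = v; gb = pos[i].clone(); improved = true; }
            }
            // Re-initialization: after STALL_LIMIT iterations without global
            // improvement, scatter the swarm again but keep the global best.
            if (improved) stall = 0;
            else if (++stall >= STALL_LIMIT) { scatter(pos, vel); stall = 0; }
        }
        System.out.printf("best f=%.6f at (%.3f, %.3f)%n", gbVal, gb[0], gb[1]);
    }
}
```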
Abstract:
Purpose - The purpose of this paper is to discuss the linear solution of equality-constrained problems using the Frontal solution method without explicit assembling. Design/methodology/approach - A re-written frontal solution method with a priori pivot and front sequences; OpenMP parallelization, nearly linear (in elimination and substitution) up to 40 threads; constraints enforced at the local assembling stage. Findings - When compared with both standard sparse solvers and classical frontal implementations, memory requirements and code size are significantly reduced. Research limitations/implications - Large non-linear problems with constraints typically make use of the Newton method with Lagrange multipliers. In the context of problems with a large number of constraints, matrix transformation methods (MTM) are often more cost-effective. The paper presents a complete solution, with topological ordering, for this problem. Practical implications - A complete software package in Fortran 2003 is described. Examples of clique-based problems are shown, with large systems solved in core. Social implications - More realistic non-linear problems can be solved with this Frontal code at the core of the Newton method. Originality/value - Use of topological ordering of constraints; a priori pivot and front sequences; no need for symbolic assembling; constraints treated at the core of the Frontal solver; use of OpenMP in the main Frontal loop, now quantified; availability of the software.
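To illustrate the assemble-then-eliminate cycle at the heart of any frontal solver, here is a serial Java toy for a 1D chain of 2x2 element matrices (a spring chain with the load at the free end). The paper's solver (Fortran 2003, OpenMP, a priori pivoting, constraint handling) is far more general; everything below is our own schematic illustration.

```java
// Schematic frontal elimination: assemble one element at a time into a small
// dense "front", then immediately eliminate each degree of freedom once it is
// fully summed, never forming the global matrix. Toy problem: a chain of n
// springs of stiffness k, fixed at one end, with a unit load at the free end.
public class FrontalSketch {
    public static void main(String[] args) {
        int n = 5;                    // unknowns u[0..n-1]
        double k = 2.0;               // spring stiffness
        double[] f = {0, 0, 0, 0, 1}; // load only at the free end

        // Front holds at most 2 active dofs: (current, next).
        double a11 = 0, a12 = 0, a22 = 0, b1 = 0, b2 = 0;
        double[] diag = new double[n], upper = new double[n], rhs = new double[n];

        // First element connects the fixed support (dropped a priori) to u[0].
        a11 = k; b1 = f[0];
        for (int e = 0; e < n - 1; e++) {
            // Assemble element stiffness k*[[1,-1],[-1,1]] over dofs (e, e+1).
            a11 += k; a12 -= k; a22 += k; b2 += f[e + 1];
            // dof e is now fully summed: eliminate it, record for back-subst.
            diag[e] = a11; upper[e] = a12; rhs[e] = b1;
            a22 -= a12 * a12 / a11; b2 -= a12 * b1 / a11;
            a11 = a22; b1 = b2; a12 = 0; a22 = 0; b2 = 0;
        }
        diag[n - 1] = a11; upper[n - 1] = 0; rhs[n - 1] = b1;

        // Back-substitution over the elimination records.
        double[] u = new double[n];
        u[n - 1] = rhs[n - 1] / diag[n - 1];
        for (int i = n - 2; i >= 0; i--)
            u[i] = (rhs[i] - upper[i] * u[i + 1]) / diag[i];
        for (double ui : u) System.out.printf("%.4f%n", ui); // 0.5 1.0 ... 2.5
    }
}
```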