950 results for Symbolic Execution
Abstract:
A construction project is a group of discernible tasks or activities that are conducted in a coordinated effort to accomplish one or more objectives. Construction projects require varying levels of cost, time and other resources. To plan and schedule a construction project, activities must be defined sufficiently. The level of detail determines the number of activities contained within the project plan and schedule. Finding feasible schedules which efficiently use scarce resources is therefore a challenging task within project management. In this context, the well-known Resource-Constrained Project Scheduling Problem (RCPSP) has been studied during the last decades. In the RCPSP the activities of a project have to be scheduled such that the makespan of the project is minimized, observing both the technological precedence constraints and the limited availability of the renewable resources required to accomplish the activities. Once started, an activity may not be interrupted. This problem has been extended to a more realistic model, the multi-mode resource-constrained project scheduling problem (MRCPSP), where each activity can be performed in one out of several modes. Each mode of an activity represents an alternative way of combining different levels of resource requirements with a related duration. Each renewable resource, such as manpower and machines, has a limited availability for the entire project. This paper presents a hybrid genetic algorithm for the multi-mode resource-constrained project scheduling problem, in which multiple execution modes are available for each of the activities of the project. The objective function is the minimization of the construction project completion time. To solve the problem, a two-level genetic algorithm is applied, which makes use of two separate levels and extends the parameterized schedule generation scheme. The quality of the schedules is evaluated and detailed comparative computational results for the MRCPSP are presented, which reveal that this approach is a competitive algorithm.
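The parameterized schedule generation scheme mentioned in this abstract builds a schedule activity by activity under precedence and resource constraints. The sketch below is a minimal, generic serial scheme for a fixed mode choice per activity, not the authors' two-level hybrid algorithm; the data structures and the small example instance are illustrative assumptions.

```python
# Minimal serial schedule generation scheme (SGS) for a project with
# renewable resources and a fixed mode choice per activity.
# Illustrative sketch only: the data structures and the tiny instance are
# assumptions, not the two-level hybrid genetic algorithm of the paper.

def serial_sgs(activities, precedences, capacities, horizon=1000):
    # activities:  {id: (duration, {resource: demand})} for the chosen mode
    # precedences: {id: set of predecessor ids}
    # capacities:  {resource: per-period availability}
    usage = {r: [0] * horizon for r in capacities}   # resource profiles
    start, finish = {}, {}

    def feasible(t, dur, demand):
        return all(usage[r][tau] + q <= capacities[r]
                   for r, q in demand.items()
                   for tau in range(t, t + dur))

    scheduled = set()
    while len(scheduled) < len(activities):
        # pick an eligible activity (all predecessors already finished)
        j = next(a for a in activities
                 if a not in scheduled and precedences[a] <= scheduled)
        dur, demand = activities[j]
        t = max([finish[p] for p in precedences[j]], default=0)
        while not feasible(t, dur, demand):          # shift right until
            t += 1                                   # resource-feasible
        for r, q in demand.items():
            for tau in range(t, t + dur):
                usage[r][tau] += q
        start[j], finish[j] = t, t + dur
        scheduled.add(j)
    return start, max(finish.values())               # schedule, makespan


# tiny instance: two renewable resources, four activities
acts = {1: (3, {"R1": 2}), 2: (2, {"R1": 1, "R2": 1}),
        3: (4, {"R2": 2}), 4: (1, {"R1": 1})}
prec = {1: set(), 2: {1}, 3: {1}, 4: {2, 3}}
print(serial_sgs(acts, prec, {"R1": 2, "R2": 2}))    # makespan 10
```

In a genetic algorithm the eligible-activity choice and the mode per activity would be driven by the chromosome rather than by the first-eligible rule used here.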
Abstract:
OBJECTIVE To evaluate the validity and reliability of an instrument that evaluates the structure of primary health care units for the treatment of tuberculosis. METHODS This cross-sectional study used simple random sampling and evaluated 1,037 health care professionals from five Brazilian municipalities (Natal, state of Rio Grande do Norte; Cabedelo, state of Paraíba; Foz do Iguaçu, state of Paraná; São José do Rio Preto, state of São Paulo; and Uberaba, state of Minas Gerais) in 2011. Structural indicators were identified and validated, considering different methods of organization of the health care system in municipalities of different population sizes. Each structure represented the organization of health care services and contained the resources available for the execution of health care services: physical resources (equipment, consumables, and facilities); human resources (number and qualification); and resources for maintenance of the existing infrastructure and technology (deemed as the organization of health care services). The statistical analyses used in the validation process included reliability analysis, exploratory factor analysis, and confirmatory factor analysis. RESULTS The validation process indicated the retention of five factors, with 85.9% of the total variance explained, internal consistency between 0.6460 and 0.7802, and quality of fit of the confirmatory factor analysis of 0.995 using the goodness-of-fit index. The retained factors comprised five structural indicators: professionals involved in the care of tuberculosis patients, training, access to recording instruments, availability of supplies, and coordination of health care services with other levels of care. Availability of supplies had the best performance and the lowest coefficient of variation among the services evaluated. The indicators of assessment of human resources and coordination with other levels of care had satisfactory performance, but the latter showed the highest coefficient of variation. The performance of the indicators “training” and “access to recording instruments” was inferior to that of other indicators. CONCLUSIONS The instrument showed feasibility of application and potential to assess the structure of primary health care units for the treatment of tuberculosis.
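The internal-consistency figures quoted above (0.6460 to 0.7802) are Cronbach's alpha values; a minimal sketch of that computation is shown below. The item-response matrix is invented for illustration and is unrelated to the study's data.

```python
# Cronbach's alpha for one factor's items; the response matrix below is
# invented for illustration and is unrelated to the study's data.
import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, n_items) matrix of scores for one factor
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances / total_variance)

# hypothetical 5-point responses from 6 professionals to 4 items
scores = [[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4],
          [2, 3, 2, 3], [4, 4, 5, 4], [3, 4, 3, 3]]
print(round(cronbach_alpha(scores), 3))
```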
Abstract:
The three-dimensional (3D) exact solutions developed in the early 1970s by Pagano for simply supported multilayered orthotropic composite plates and later in the 1990s extended to piezoelectric plates by Heyliger have been extremely useful in the assessment and development of advanced laminated plate theories and related finite element models. In fact, the well-known test cases provided by Pagano and by Heyliger in those earlier works are still used today as benchmark solutions. However, the limited number of test cases whose 3D exact solutions have been published has somewhat restricted the assessment of recent advanced models to the same few test cases. This work aims to provide additional test cases to serve as benchmark exact solutions for the static analysis of multilayered piezoelectric composite plates. The method introduced by Heyliger to derive the 3D exact solutions has been successfully implemented using symbolic computing and a number of new test cases are here presented thoroughly. Specifically, two multilayered plates using PVDF piezoelectric material are selected as test cases under two different loading conditions and considering three plate aspect ratios for thick, moderately thick and thin plate, in a total of 12 distinct test cases. (C) 2013 Elsevier Ltd. All rights reserved.
Abstract:
Functionally graded materials are a type of composite materials which are tailored to provide continuously varying properties, according to specific constituents' mixing distributions. These materials are known to provide superior thermal and mechanical performance when compared to traditional laminated composites, because this continuous variation of properties enables, among other advantages, smoother stress distribution profiles. The growing use of these materials therefore brings with it the interest and the need to obtain optimum configurations for each specific application. In this work, the use of the particle swarm optimization technique for the maximization of the bending stiffness of a functionally graded sandwich beam is studied. For this purpose, a set of case studies is analyzed in order to understand, in a detailed way, how the tuning of the different optimization parameters can influence the whole process. A re-initialization strategy is also considered, which, as far as could be concluded from the published research works, is not a common approach in particle swarm optimization. As will be shown, this strategy can provide good results and also present some advantages under certain conditions. This work was developed and programmed on the symbolic computation platform Maple 14. (C) 2013 Elsevier B.V. All rights reserved.
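The re-initialization strategy described above can be sketched generically: when the global best stagnates, part of the swarm is re-seeded at random while the best-so-far solution is kept. The sketch below uses a placeholder objective rather than the sandwich-beam stiffness model (which the authors implemented in Maple); the coefficients, bounds and stagnation rule are illustrative assumptions.

```python
# Generic particle swarm optimization (maximization) with a simple
# re-initialization strategy: if the global best stagnates for a number of
# iterations, the worst particles are re-seeded at random.  The objective is
# a placeholder, not the functionally graded sandwich-beam stiffness model.
import random

def pso(objective, bounds, n_particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, stall_limit=15, reseed_frac=0.5):
    dim = len(bounds)
    rnd = lambda d: random.uniform(*bounds[d])
    x = [[rnd(d) for d in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pval = [objective(xi) for xi in x]
    g = max(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    stall = 0
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][d] = (w * v[i][d] + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] = min(max(x[i][d] + v[i][d], bounds[d][0]), bounds[d][1])
            val = objective(x[i])
            if val > pval[i]:
                pbest[i], pval[i] = x[i][:], val
        best = max(range(n_particles), key=lambda i: pval[i])
        if pval[best] > gval:
            gbest, gval, stall = pbest[best][:], pval[best], 0
        else:
            stall += 1
        if stall >= stall_limit:                  # re-initialization step
            order = sorted(range(n_particles), key=lambda i: pval[i])
            for i in order[:int(reseed_frac * n_particles)]:
                x[i] = [rnd(d) for d in range(dim)]
                v[i] = [0.0] * dim
                pbest[i], pval[i] = x[i][:], objective(x[i])
            stall = 0
    return gbest, gval


# placeholder objective standing in for the bending-stiffness evaluation
f = lambda p: -(p[0] - 0.3) ** 2 - (p[1] - 0.7) ** 2
print(pso(f, bounds=[(0.0, 1.0), (0.0, 1.0)]))
```

Keeping the incumbent global best while re-seeding only the worst particles restores diversity without discarding the best solution found so far.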
Abstract:
The purpose of this paper is to discuss the linear solution of equality constrained problems by using the Frontal solution method without explicit assembling. Design/methodology/approach - Re-written frontal solution method with a priori pivot and front sequence. OpenMP parallelization, nearly linear (in elimination and substitution) up to 40 threads. Constraints enforced at the local assembling stage. Findings - When compared with both standard sparse solvers and classical frontal implementations, memory requirements and code size are significantly reduced. Research limitations/implications - Large, non-linear problems with constraints typically make use of the Newton method with Lagrange multipliers. In the context of the solution of problems with large number of constraints, the matrix transformation methods (MTM) are often more cost-effective. The paper presents a complete solution, with topological ordering, for this problem. Practical implications - A complete software package in Fortran 2003 is described. Examples of clique-based problems are shown with large systems solved in core. Social implications - More realistic non-linear problems can be solved with this Frontal code at the core of the Newton method. Originality/value - Use of topological ordering of constraints. A-priori pivot and front sequences. No need for symbolic assembling. Constraints treated at the core of the Frontal solver. Use of OpenMP in the main Frontal loop, now quantified. Availability of Software.
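The frontal idea referred to above, assembling element by element and eliminating each fully summed unknown so that the global matrix is never formed, can be illustrated on a deliberately small case: a one-dimensional chain of two-node elements with the first node fixed. This is a didactic reduction only, not the Fortran 2003 package, pivot/front ordering or constraint treatment described in the abstract.

```python
# Didactic frontal solution of K u = f for a 1D chain of two-node elements
# (element e joins nodes e and e+1, node 0 is fixed at u0).  Each element
# matrix is assembled into a two-entry front and every fully summed unknown
# is eliminated at once, so the global matrix is never formed.  A sketch
# under these assumptions, not the Fortran 2003 package of the paper.

def frontal_chain(k, f, u0=0.0):
    # k[e]: stiffness of element e; f[i]: load on node i (len(f) == len(k)+1)
    n = len(k)
    history = [None] * n            # (diag, coupling, rhs) per eliminated node
    # front carried to node 1: contribution of element 0 with u_0 prescribed
    diag, rhs = k[0], k[0] * u0
    for e in range(1, n):
        ke = k[e]
        d = diag + ke               # node e is now fully summed
        c = -ke                     # coupling between nodes e and e+1
        r = rhs + f[e]
        history[e] = (d, c, r)
        diag = ke - c * c / d       # Schur complement carried to node e+1
        rhs = -c * r / d
    u = [0.0] * (n + 1)
    u[0] = u0
    u[n] = (rhs + f[n]) / diag      # last node closes the front
    for e in range(n - 1, 0, -1):   # back substitution from stored fronts
        d, c, r = history[e]
        u[e] = (r - c * u[e + 1]) / d
    return u


# three unit-stiffness elements, unit load at the free end -> approx. [0, 1, 2, 3]
print(frontal_chain([1.0, 1.0, 1.0], [0.0, 0.0, 0.0, 1.0]))
```

Only the two-entry front is ever held in memory; that is the property the frontal method exploits on large, irregular meshes, where the front is wider but still far smaller than the assembled matrix.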
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa for obtaining the degree of Master in Informatics Engineering.
Abstract:
The International Electrotechnical Commission (IEC) 61499 architecture incorporates several function blocks with which distributed control applications may be developed, and defines how these are interpreted and executed. However, due to the distributed nature of the control applications, many issues also need to be taken into account. Most of these arise from the new error model and failure modes of the distributed hardware on which the distributed application is executed, and also from the incomplete definition of the execution models in the standard. IEC 61499 frameworks do not clarify how to handle the replication of software and hardware components. In this paper we propose a replication model for IEC 61499 applications and discuss which mechanisms and protocols may be used to support it.
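The abstract does not detail the proposed replication model; purely as a point of reference, one common mechanism from the literature, active replication with majority voting over the outputs of identical function block replicas, is sketched below. The run_replicated interface is a hypothetical simplification of a basic function block's event/data execution, not the paper's model or protocols.

```python
# Generic active-replication sketch: N replicas of the same function block
# algorithm run on the same event/data inputs and a majority vote masks a
# faulty replica.  A common textbook mechanism shown only for reference;
# run_replicated is a hypothetical simplification, not the paper's model.
from collections import Counter

def run_replicated(replicas, event, data):
    # replicas: callables implementing the same FB algorithm
    outputs = []
    for fb in replicas:
        try:
            outputs.append(fb(event, data))
        except Exception:                  # a crashed replica simply abstains
            pass
    if not outputs:
        raise RuntimeError("all replicas failed")
    value, votes = Counter(outputs).most_common(1)[0]
    if votes > len(replicas) // 2:         # strict majority of all replicas
        return value
    raise RuntimeError("no majority among replicas")


# example: one faulty replica is outvoted by two correct ones
ok = lambda ev, x: x * 2
faulty = lambda ev, x: x * 2 + 1
print(run_replicated([ok, ok, faulty], "REQ", 21))   # -> 42
```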
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa for obtaining the degree of Master in Electrical and Computer Engineering.
Abstract:
This paper presents a genetic algorithm for the multi-mode resource-constrained project scheduling problem (MRCPSP), in which multiple execution modes are available for each of the activities of the project. The objective function is the minimization of the construction project completion time. To solve the problem, a two-level genetic algorithm is applied, which makes use of two separate levels and extends the parameterized schedule generation scheme by introducing an improvement procedure. The quality of the schedules is evaluated and detailed comparative computational results for the MRCPSP are presented, which reveal that this approach is a competitive algorithm.
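For context, the chromosome-level part of such an approach can be sketched as a plain genetic algorithm over (mode assignment, activity priority) pairs that are decoded into schedules by a generation scheme like the one sketched earlier in this listing. Population size, operators and rates below are illustrative choices, and the toy decoder stands in for a real schedule evaluation; this is not the authors' two-level algorithm or its improvement procedure.

```python
# Skeleton of a genetic algorithm over (mode assignment, activity priority)
# chromosomes for the MRCPSP.  The decoder is assumed to be a schedule
# generation scheme such as the one sketched earlier in this listing;
# population size, operators and rates are illustrative choices, not the
# two-level algorithm or the improvement procedure of the paper.
import random

def genetic_mrcpsp(n_acts, n_modes, decode, pop_size=30, generations=100,
                   cx_rate=0.8, mut_rate=0.1):
    # decode(modes, priorities) -> makespan of the resulting schedule
    def random_chrom():
        return ([random.randrange(n_modes) for _ in range(n_acts)],
                [random.random() for _ in range(n_acts)])

    def crossover(a, b):                       # one-point crossover
        cut = random.randrange(1, n_acts)
        return (a[0][:cut] + b[0][cut:], a[1][:cut] + b[1][cut:])

    def mutate(c):                             # per-gene reset mutation
        modes, prio = list(c[0]), list(c[1])
        for j in range(n_acts):
            if random.random() < mut_rate:
                modes[j] = random.randrange(n_modes)
                prio[j] = random.random()
        return modes, prio

    pop = [random_chrom() for _ in range(pop_size)]
    best = min(pop, key=lambda c: decode(*c))
    for _ in range(generations):
        ranked = sorted(pop, key=lambda c: decode(*c))
        best = min(best, ranked[0], key=lambda c: decode(*c))
        parents = ranked[:pop_size // 2]       # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = crossover(a, b) if random.random() < cx_rate else a
            children.append(mutate(child))
        pop = children
    return best, decode(*best)


# toy decoder standing in for an SGS-based schedule evaluation
toy = lambda modes, prio: sum(m + 1 for m in modes) + 0.01 * sum(prio)
print(genetic_mrcpsp(n_acts=8, n_modes=3, decode=toy))
```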
Abstract:
Fault injection is frequently used for the verification and validation of dependable systems. When targeting real-time microprocessor-based systems, the process becomes significantly more complex. This paper proposes two complementary solutions to improve the execution of real-time fault injection campaigns, both in terms of performance and capabilities. The methodology is based on the use of the on-chip debug mechanisms present in modern electronic devices. The main objective is the injection of faults into microprocessor memory elements with minimum delay and intrusiveness. Different configurations were implemented and compared in terms of performance gain and logic overhead.
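The campaign structure implied above, halting the target through the on-chip debug port, flipping one bit in a memory element, resuming and classifying the outcome, can be sketched as follows. Every method called on the ocd object is a hypothetical placeholder for a concrete debug-probe driver, not an existing API and not the authors' environment; the sketch covers only the campaign loop.

```python
# Skeleton of a single-bit-flip fault injection campaign driven through an
# on-chip debug (OCD) connection.  Every method called on the ocd object
# (reset_and_start, run_for, halt, read_word, write_word, resume,
# wait_for_end, read_result) is a hypothetical placeholder for a concrete
# debug-probe driver; this sketches only the campaign loop, not the
# solutions proposed in the paper.
import random

def run_campaign(ocd, addresses, golden_result, n_faults=1000, timeout_s=1.0):
    stats = {"silent": 0, "wrong_result": 0, "crash_or_timeout": 0}
    for _ in range(n_faults):
        addr = random.choice(addresses)           # target memory element
        bit = random.randrange(32)
        trigger = random.uniform(0.0, timeout_s)  # injection instant
        ocd.reset_and_start()
        ocd.run_for(trigger)                      # let the workload run
        ocd.halt()                                # stop with minimal intrusiveness
        word = ocd.read_word(addr)
        ocd.write_word(addr, word ^ (1 << bit))   # flip a single bit
        ocd.resume()
        if not ocd.wait_for_end(timeout_s):
            stats["crash_or_timeout"] += 1
        elif ocd.read_result() == golden_result:
            stats["silent"] += 1                  # fault masked or overwritten
        else:
            stats["wrong_result"] += 1
    return stats
```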
Abstract:
Work presented within the scope of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.
Abstract:
Master's in Socio-Organizational Intervention in Health - Area of specialization: Policies of Administration and Management of Health Services
Abstract:
Master's in Socio-Organizational Intervention in Health - Area of specialization: Policies of Administration and Management of Health Services
Abstract:
A comparative study of the robustness of a novel, Fixed Point Transformations/Singular Value Decomposition (FPT/SVD)-based adaptive controller and the Slotine-Li (S&L) approach is given by numerical simulations using a three-degree-of-freedom paradigm of typical classical mechanical systems, the cart plus double pendulum. The effects of the imprecision of the available dynamical model, the presence of dynamic friction at the axles of the drives, and the existence of external disturbance forces unknown and not modeled by the controller are considered. While the Slotine-Li approach tries to identify the parameters of the formally precise, available analytical model of the controlled system, with the implicit assumption that the generalized forces are precisely known, the novel approach makes do with a very rough, affine form and a formally more precise approximate model of that system, and uses temporal observations of its desired versus realized responses. Furthermore, it does not assume the absence of unknown perturbations caused by internal friction and/or external disturbances. Another advantage is that the SVD, a relatively time-consuming operation, has to be executed on a grid of a rough system model only once, before the commencement of the control cycle, within which only simple computations are performed. The simulation examples illustrate the superiority of the FPT/SVD-based control, which otherwise has the deficiency that it can leave the region of its convergence; its design and use therefore require preliminary simulation investigations. However, the simulations also show that its convergence can be guaranteed for various practical purposes.
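The fixed-point idea underlying this kind of controller, iteratively deforming the command fed to the plant until the observed response matches the desired one, can be illustrated on a scalar toy problem. The plant gain, the contraction coefficient and the update rule below are illustrative assumptions; this is a generic contraction-mapping example, not the FPT/SVD controller or the Slotine-Li law compared in the paper.

```python
# Toy illustration of the fixed-point idea behind response deformation:
# find the command u whose *realized* response matches a desired response
# r_des by iterating a contractive update that uses only observed responses.
# Scalar example with an assumed plant/model mismatch; not the FPT/SVD
# controller (nor the Slotine-Li law) studied in the paper.

def realized_response(u):
    # the "true" plant, unknown to the controller (illustrative assumption)
    return 1.8 * u + 0.4

def deform(u, r_obs, r_des, lam=0.45):
    # contractive update: shift the command by a fraction of the response error
    return u + lam * (r_des - r_obs)

r_des = 2.0
u = r_des                     # initial guess from a rough unit-gain model
for _ in range(30):
    r_obs = realized_response(u)
    u = deform(u, r_obs, r_des)
print(u, realized_response(u))   # the realized response converges to r_des
```

The iteration converges here because the composite update is a contraction (|1 - 1.8 * 0.45| < 1); as the abstract notes for the real controller, leaving the region of convergence is the main risk and motivates preliminary simulation studies.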
Abstract:
The rapid increase in the use of microprocessor-based systems in critical areas, where failures imply risks to human lives, to the environment or to expensive equipment, has significantly increased the need for dependable systems, able to detect, tolerate and eventually correct faults. The verification and validation of such systems is frequently performed via fault injection, using various forms and techniques. However, as electronic devices get smaller and more complex, controllability and observability issues, and sometimes real-time constraints, make it harder to apply most conventional fault injection techniques. This paper proposes a fault injection environment and a scalable methodology to assist the execution of real-time fault injection campaigns, providing enhanced performance and capabilities. Our proposed solutions are based on the use of common and customized on-chip debug (OCD) mechanisms, present in many modern electronic devices, with the main objective of enabling the insertion of faults into microprocessor memory elements with minimum delay and intrusiveness. Different configurations were implemented, starting from basic Components Off-The-Shelf (COTS) microprocessors equipped with real-time OCD infrastructures, to improved solutions based on modified interfaces and dedicated OCD circuitry that enhance fault injection capabilities and performance. All methodologies and configurations were evaluated and compared concerning performance gain and silicon overhead.