18 results for mathematical tasks
at Instituto Politécnico do Porto, Portugal
Fuzzy Monte Carlo mathematical model for load curtailment minimization in transmission power systems
Abstract:
This paper presents a methodology that is based on statistical failure and repair data of the transmission power system components and uses fuzzy-probabilistic modeling of the system component outage parameters. Using statistical records allows the fuzzy membership functions of the system component outage parameters to be developed. The proposed hybrid method of fuzzy sets and Monte Carlo simulation, based on the fuzzy-probabilistic models, allows capturing both the randomness and the fuzziness of component outage parameters. Once the system states are obtained by Monte Carlo simulation, a network contingency analysis is performed to identify any overloading or voltage violation in the network. This is followed by a remedial action algorithm, based on optimal power flow, which reschedules generation to alleviate constraint violations for the states identified by the contingency analysis and, at the same time, to avoid any load curtailment if possible or, otherwise, to minimize the total load curtailment. To illustrate the application of the proposed methodology to a practical case, the paper includes a case study for the IEEE 24-bus Reliability Test System (RTS) 1996.
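As an illustration of the sampling idea only, a fuzzy-probabilistic Monte Carlo draw can be sketched as below; the triangular membership function, the alpha-cut level and all names are assumptions of this sketch, not taken from the paper.

```python
# Minimal sketch: one Monte Carlo system state when each component's
# forced-outage probability is a triangular fuzzy number sampled via an alpha-cut.
import random

def alpha_cut(low, mode, high, alpha):
    """Interval of a triangular fuzzy number at membership level alpha."""
    return low + alpha * (mode - low), high - alpha * (high - mode)

def sample_system_state(n_components, fuzzy_outage_prob, alpha):
    """One Monte Carlo draw: True means the component is in service."""
    lo, hi = alpha_cut(*fuzzy_outage_prob, alpha)
    outage_prob = random.uniform(lo, hi)  # randomness inside the fuzzy interval
    return [random.random() >= outage_prob for _ in range(n_components)]

# Example: 24 components whose forced-outage probability is "around 2%"
print(sample_system_state(24, (0.01, 0.02, 0.04), alpha=0.8))
```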
Abstract:
The Mathematical Program with Complementarity Constraints (MPCC) finds application in many fields. Because the complementarity constraints fail the standard Linear Independence Constraint Qualification (LICQ) and the Mangasarian-Fromovitz Constraint Qualification (MFCQ) at every feasible point, nonlinear programming theory cannot be applied directly to MPCC. However, an MPCC can be reformulated as an NLP problem and solved by nonlinear programming techniques. One of them, the Inexact Restoration (IR) approach, performs two independent phases in each iteration: the feasibility phase and the optimality phase. This work presents two versions of an IR algorithm to solve MPCC. In the feasibility phase two strategies were implemented, depending on the features of the constraints. One gives more importance to the complementarity constraints, while the other gives priority to the equality and inequality constraints, neglecting the complementarity ones. The optimality phase uses the same approach in both algorithm versions. The algorithms were implemented in MATLAB and the test problems are taken from the MacMPEC collection.
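For context, a generic MPCC in the usual notation of the literature (not quoted from the paper) can be written as:

```latex
\begin{align*}
\min_{x}\quad & f(x) \\
\text{s.t.}\quad & g(x) \le 0, \qquad h(x) = 0, \\
                 & 0 \le G(x) \;\perp\; H(x) \ge 0,
\end{align*}
```

where the complementarity condition $G(x)^{\top}H(x) = 0$ together with $G(x) \ge 0$ and $H(x) \ge 0$ is precisely what causes LICQ and MFCQ to fail at every feasible point.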
Abstract:
Cloud computing is increasingly being adopted in different scenarios, such as social networking, business applications and scientific experiments. Relying on virtualization technology, the construction of these computing environments targets improvements in the infrastructure, such as power efficiency and the fulfillment of users' SLA specifications. The methodology usually applied is to pack all the virtual machines onto the appropriate physical servers. However, failure occurrences in these networked computing systems can have a substantial negative impact on system performance, deviating the system from our initial objectives. In this work, we propose adapted algorithms to dynamically map virtual machines to physical hosts, in order to improve cloud infrastructure power efficiency with low impact on the users' required performance. Our decision-making algorithms leverage proactive fault-tolerance techniques to deal with system failures, allied with virtual machine technology to share node resources in an accurate and controlled manner. The results indicate that our algorithms perform better in targeting power efficiency and SLA fulfillment in the face of cloud infrastructure failures.
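A hypothetical sketch of the underlying idea (not the authors' algorithm; host names, capacities and the failure-prediction interface are invented for illustration): consolidate VMs onto as few healthy hosts as possible with a first-fit-decreasing pass that skips hosts flagged by a proactive failure monitor.

```python
# Hypothetical illustration: first-fit-decreasing consolidation that avoids
# hosts a proactive failure monitor has flagged as likely to fail.
def place_vms(vms, hosts, predicted_failures):
    """vms: {name: cpu_demand}; hosts: {name: cpu_capacity};
    predicted_failures: set of host names flagged by the monitor."""
    load = {h: 0.0 for h in hosts}
    placement = {}
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        placement[vm] = None  # stays None if no feasible healthy host exists
        for h in sorted(hosts, key=lambda name: -load[name]):  # prefer busy hosts
            if h in predicted_failures:
                continue
            if load[h] + demand <= hosts[h]:
                load[h] += demand
                placement[vm] = h
                break
    return placement

print(place_vms({"vm1": 2.0, "vm2": 1.5, "vm3": 1.0},
                {"h1": 4.0, "h2": 4.0, "h3": 4.0},
                predicted_failures={"h2"}))
```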
Abstract:
In this paper we present a modified regularization scheme for Mathematical Programs with Complementarity Constraints. In the regularized formulations the complementarity condition is replaced by a constraint involving a positive parameter that can be decreased to zero. In our approach both the complementarity condition and the nonnegativity constraints are relaxed. An iterative algorithm was implemented in the MATLAB language and a set of AMPL problems from the MacMPEC database was tested.
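One common relaxation of this kind, written here only to fix ideas (the exact scheme and parameters in the paper may differ), replaces the complementarity and nonnegativity conditions with:

```latex
\begin{align*}
& G_i(x) \ge -t, \qquad H_i(x) \ge -t, \\
& G_i(x)\, H_i(x) \le t, \qquad i = 1, \dots, m,
\end{align*}
```

and solves the resulting NLP for a decreasing sequence of positive parameters $t \downarrow 0$.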
Abstract:
Transdermal biotechnologies are a field of ever-increasing interest, due to the medical and pharmaceutical applications that they underlie. There are several mathematical models in use that permit a more inclusive view of pure experimental data and even allow practical extrapolation for new dermal diffusion methodologies. However, they encompass a complex variety of theories and assumptions that restrict their use to specific situations. Models based on Fick's First Law are better suited to contexts where scaled-particle-theory models would demand extensive time spans, but the reverse is also true as the context of transdermal diffusion of particular active compounds changes. This article extensively reviews the various theoretical methodologies for studying dermal diffusion in the rate-limiting dermal barrier, the stratum corneum, and systematizes their characteristics, their proper context of application, advantages and limitations, as well as future perspectives.
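For reference, Fick's First Law and the steady-state flux expression it leads to for a membrane of thickness $h$ (standard symbols, not taken from the article) are:

```latex
\[
  J = -D\,\frac{\partial C}{\partial x}
  \qquad\Longrightarrow\qquad
  J_{ss} = \frac{D\,K}{h}\,\Delta C = K_p\,\Delta C,
\]
```

where $D$ is the diffusion coefficient in the stratum corneum, $K$ the vehicle/membrane partition coefficient, $\Delta C$ the concentration difference across the barrier and $K_p$ the permeability coefficient.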
Abstract:
A preliminary version of this paper appeared in Proceedings of the 31st IEEE Real-Time Systems Symposium, 2010, pp. 239–248.
Abstract:
It is generally challenging to determine the end-to-end delays of applications when maximizing the aggregate system utility subject to timing constraints. Many practical approaches suggest the use of intermediate deadlines for tasks in order to control and upper-bound their end-to-end delays. This paper proposes a unified framework for different time-sensitive, global optimization problems, and solves them in a distributed manner using Lagrangian duality. The framework uses global viewpoints to assign intermediate deadlines, taking resource contention among tasks into consideration. For soft real-time tasks, the proposed framework effectively addresses the deadline-assignment problem while maximizing the aggregate quality of service. For hard real-time tasks, we show that existing heuristic solutions to the deadline-assignment problem can be incorporated into the proposed framework, enriching their mathematical interpretation.
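An illustrative form of the deadline-assignment problem behind such a framework (notation assumed here, not quoted from the paper): choose the intermediate deadlines $d_{i,j}$ of the stages of each task $i$ so that

```latex
\begin{align*}
\max_{d}\quad & \sum_{i} U_i\!\left(d_{i,1}, \dots, d_{i,n_i}\right) \\
\text{s.t.}\quad & \sum_{j=1}^{n_i} d_{i,j} \le D_i \quad \text{(end-to-end deadline of task } i\text{)}, \\
& \text{each resource remains schedulable under the assigned } d_{i,j},
\end{align*}
```

relaxing the coupling constraints with Lagrange multipliers then decomposes the problem into per-resource subproblems that can be solved in a distributed manner.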
Abstract:
High-level parallel languages offer a simple way for application programmers to specify parallelism in a form that easily scales with problem size, leaving the scheduling of the tasks onto processors to be performed at run time. Therefore, if the underlying system cannot efficiently execute those applications on the available cores, the benefits will be lost. In this paper, we consider how to schedule highly heterogeneous parallel applications that require real-time performance guarantees on multicore processors. The paper proposes a novel scheduling approach that combines the global Earliest Deadline First (EDF) scheduler with a priority-aware work-stealing load-balancing scheme, which enables parallel real-time tasks to be executed on more than one processor at a given time instant. Experimental results demonstrate the better scalability and lower scheduling overhead of the proposed approach compared to an existing real-time deadline-oriented scheduling class for the Linux kernel.
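A simplified illustration of priority-aware stealing (not the authors' scheduler; the data structures and the victim-selection rule are assumptions of this sketch): an idle worker steals from the worker currently executing the task with the earliest deadline, so spare cores are given first to the most urgent parallel task.

```python
# Simplified sketch of priority-aware work stealing: steal from the victim
# whose running task has the earliest (most urgent) deadline.
from collections import deque

def steal(idle_worker, workers):
    """Each worker is a dict with 'deadline' (of its running task) and 'deque'."""
    victims = [w for w in workers if w is not idle_worker and w["deque"]]
    if not victims:
        return None
    victim = min(victims, key=lambda w: w["deadline"])  # earliest-deadline victim
    return victim["deque"].popleft()                    # take one queued sub-task

workers = [{"deadline": 10, "deque": deque(["a1", "a2"])},
           {"deadline": 5,  "deque": deque(["b1"])},
           {"deadline": 20, "deque": deque()}]
print(steal(workers[2], workers))  # prints 'b1', stolen from the most urgent task
```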
Abstract:
Embedded real-time systems often have to support the embedding system in very different and changing application scenarios. An aircraft taxiing, taking off and in cruise flight is one example. The different application scenarios are reflected in the software structure by a changing task set and thus different operational modes. At the same time there is a strong push for integrating previously isolated functionalities on single-chip multicore processors. On such multicores, the behavior of the system during a mode change, when the system transitions from one mode to another, is complex but crucial to get right. In the past we have investigated mode changes in multiprocessor systems where a mode change requires a complete change of the task set. Now, we present the first analysis which considers mode changes in multicore systems that use global EDF to schedule a set of mode-independent (MI) and mode-specific (MS) tasks. In such systems, only the set of MS tasks has to be replaced during mode changes, without jeopardizing the schedulability of the MI tasks. Of prime concern is that the mode change is safe and efficient: i.e., the mode change needs to be performed within a predefined time window and no deadlines may be missed as a consequence of the mode change.
Abstract:
Consider the problem of scheduling sporadic tasks on a multiprocessor platform under mutual exclusion constraints. We present an approach which appears promising for allowing a large degree of parallel task execution while still ensuring low amounts of blocking.
Abstract:
Consolidation consists of scheduling multiple virtual machines onto fewer servers in order to improve resource utilization and to reduce operational costs due to power consumption. However, virtualization technologies do not offer performance isolation, causing applications to slow down. In this work, we propose a performance-enforcing mechanism composed of a slowdown estimator and an interference- and power-aware scheduling algorithm. The slowdown estimator determines, based on noisy slowdown data samples obtained from state-of-the-art slowdown meters, whether tasks will complete within their deadlines, invoking the scheduling algorithm if needed. When invoked, the scheduling algorithm builds performance- and power-aware virtual clusters to successfully execute the tasks. We conduct simulations injecting synthetic jobs whose characteristics follow the latest version of the Google Cloud tracelogs. The results indicate that our strategy can be efficiently integrated with state-of-the-art slowdown meters to fulfil contracted SLAs in real-world environments, while reducing operational costs by about 12%.
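A hypothetical sketch of the decision step only (names, the averaging rule and the example values are assumptions, not the paper's estimator): average the noisy slowdown samples and invoke rescheduling when the estimated completion time would miss the deadline.

```python
# Hypothetical decision step: trigger rescheduling when the estimated slowdown
# pushes the projected finish time past the task's deadline.
def needs_rescheduling(slowdown_samples, remaining_work, now, deadline):
    """slowdown_samples: noisy measurements (1.0 = no interference);
    remaining_work: remaining execution time in isolation (seconds)."""
    if not slowdown_samples:
        return False
    estimated_slowdown = sum(slowdown_samples) / len(slowdown_samples)
    projected_finish = now + remaining_work * estimated_slowdown
    return projected_finish > deadline

# Example: ~1.3x slowdown turns 100 s of remaining work into ~130 s,
# missing a 120 s deadline, so rescheduling is requested.
print(needs_rescheduling([1.2, 1.4, 1.3], remaining_work=100, now=0, deadline=120))
```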
Abstract:
Pultrusion is an industrial process used to produce glass fiber reinforced polymer profiles. These materials are used worldwide when performance characteristics such as great electrical and magnetic insulation, a high strength-to-weight ratio, corrosion and weather resistance, long service life and minimal maintenance are required. In this study, we present the results of the modelling and simulation of heat flow through a pultrusion die by means of Finite Element Analysis (FEA). The numerical simulation was calibrated based on temperature profiles computed from thermographic measurements carried out during the pultrusion manufacturing process. The obtained results show a maximum deviation of 7%, which is considered acceptable for this type of analysis and is below the 10% value previously specified as the maximum deviation. © 2011, Advanced Engineering Solutions.
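The governing relation behind this kind of thermal FEA is the transient heat-conduction equation, written here in its generic form (the specific source terms and boundary conditions used in the study are defined in the paper):

```latex
\[
  \rho\, c_p\, \frac{\partial T}{\partial t}
  = \nabla \cdot \left( k\, \nabla T \right) + \dot{q},
\]
```

with density $\rho$, specific heat $c_p$, thermal conductivity $k$ and internal heat generation $\dot{q}$ (e.g. from the resin cure reaction).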
Abstract:
In this paper we study a delay mathematical model for the dynamics of HIV in HIV-specific CD4+ T helper cells. We modify the model presented by Roy and Wodarz in 2012, where the HIV dynamics is studied considering a single CD4+ T cell population. Non-specific helper cells are included as an alternative target cell population, to account for macrophages and dendritic cells. In this paper, we include two types of delay: (1) a latent period between the time target cells are contacted by the virus particles and the time the virions enter the cells; and (2) a virus-production period for new virions to be produced within and released from the infected cells. We compute the reproduction number of the model, R0, and the local stability of the disease-free equilibrium and of the endemic equilibrium. We find that for values of R0<1, the model asymptotically approaches the disease-free equilibrium. For values of R0>1, the model asymptotically approaches the endemic equilibrium. We observe numerically the phenomenon of backward bifurcation for values of R0⪅1. This statement will be proved in future work. We also vary the values of the latent period and of the production period of infected cells and free virus. We conclude that increasing these values translates into a decrease of the reproduction number. Thus, a good strategy to control the HIV virus should focus on drugs that prolong the latent period and/or slow down virus production. These results suggest that the model is mathematically and epidemiologically well posed.
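As a schematic of how the two delays typically enter such models (illustrative notation only; the full system of equations and its parameters is given in the paper):

```latex
\begin{align*}
\dot{I}(t) &= \beta\, e^{-m\tau_1}\, T(t-\tau_1)\, V(t-\tau_1) - \delta\, I(t), \\
\dot{V}(t) &= p\, e^{-n\tau_2}\, I(t-\tau_2) - c\, V(t),
\end{align*}
```

where $\tau_1$ is the latent period between contact and cell entry and $\tau_2$ the virus-production period; with the survival factors $e^{-m\tau_1}$ and $e^{-n\tau_2}$, the reproduction number decreases as either delay grows, consistent with the conclusion stated in the abstract.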
Abstract:
In this study, focused on learning to handle money, the aim was for the students to acquire skills that would give them a greater degree of independence and participation in life in society, performing financial tasks more independently, for example buying products, paying for services and managing money. To achieve this, the direct instruction methodology was used, with structured tasks. In an initial phase the researcher provided constant support to the students, which was gradually reduced as they attained the money-related skills. In the final phase, the students carried out the proposed tasks autonomously. Designed as a case study, the data were collected through direct observation and monitoring tests. The students first took an initial assessment to establish the baseline for the intervention. Subsequently, the intervention based on direct instruction was carried out, using the computer, the calculator, monitoring tests and the handling of money. The computer was used in the intervention as an assistive learning technology, allowing interactive games and the consultation of materials. At the end of the intervention the students showed autonomy in solving the tasks, as they had already automated the mathematical processes needed to handle the euro currency correctly. Direct instruction helped the students retain the essential mathematical money-handling skills, composing amounts, making payments and checking change, which can greatly contribute to their independent participation in life in society.
Abstract:
11th IEEE World Conference on Factory Communication Systems (WFCS 2015), 27 to 29 May 2015, TII-SS-2: Scheduling and Performance Analysis. Palma de Mallorca, Spain.