970 results for Exact constraint
Abstract:
Healthcare in Portugal is currently undergoing significant change. The introduction of business-management models in public institutions aims to achieve better quality at lower cost. The acquisition of increasingly sophisticated medical equipment demands redoubled effort from institutions. The need to reduce costs, coupled with the need to acquire ever more advanced technologies, requires institutions to take more rigorous measures to improve the procurement process. It is important to establish, from the very start of a procurement process, the exact needs of the institution, with a well-detailed set of specifications for the product to be acquired, as well as a set of requirements to be imposed on suppliers that safeguard the institution. Knowledge of the equipment to be acquired eases the whole process. It is therefore extremely important to carry out a sufficiently broad study of the equipment, allowing the institution to evaluate it better at selection time. Ensuring metrological reliability is another very important point to consider in the process, since the success of healthcare depends on the trust and safety it conveys to its users. The goal of this work is the study of pulmonary ventilators (PV), focusing essentially on the selection and acquisition of this equipment. This study also examines the procedures for assessing the metrological reliability of PVs, with a view to defining the verification tests to be performed throughout the procurement process. The tender specification document (Caderno de Encargos, CE) and its technical specifications/requirements are standardized, seeking to purchase according to the real needs of the institution, minimizing waste and ensuring the best quality.
Abstract:
This thesis presents the Fuzzy Monte Carlo Model for Transmission Power Systems Reliability based studies (FMC-TRel) methodology, which is based on statistical failure and repair data of the transmission power system components and uses fuzzy-probabilistic modeling for system component outage parameters. Using statistical records allows developing the fuzzy membership functions of system component outage parameters. The proposed hybrid method of fuzzy sets and Monte Carlo simulation based on the fuzzy-probabilistic models captures both the randomness and the fuzziness of component outage parameters. Once the system states are obtained, a network contingency analysis is performed to identify any overloading or voltage violation in the network. This is followed by a remedial action algorithm, based on Optimal Power Flow, to reschedule generation and alleviate constraint violations and, at the same time, to avoid any load curtailment if possible or, otherwise, to minimize the total load curtailment for the states identified by the contingency analysis. For the system states that cause load curtailment, an optimization approach is applied to reduce the probability of occurrence of these states while minimizing the cost of achieving that reduction. This methodology is of great importance for supporting the transmission system operator's decision making, namely in the identification of critical components and in the planning of future investments in the transmission power system. A case study based on the IEEE 24-bus Reliability Test System (RTS) 1996 is presented to illustrate in detail the application of the proposed methodology.
Abstract:
Chapter in book proceedings with peer review. First Iberian Conference, IbPRIA 2003, Puerto de Andratx, Mallorca, Spain, June 4-6, 2003. Proceedings.
Abstract:
OBJECTIVE: To estimate the prevalence of tuberculosis and latent tuberculosis in inmates. METHODS: An observational study was carried out with inmates of a prison and a jail in the State of São Paulo, Southeastern Brazil, between March and December of 2008. Questionnaires were used to collect sociodemographic and epidemiological data. Tuberculin skin testing was administered (PPD-RT23-2TU/0.1 mL), and the following laboratory tests were also performed: sputum smear examination, sputum culture, identification of isolated strains and drug susceptibility testing. The variables were compared using Pearson's chi-square (Χ²) association test, Fisher's exact test and the proportion test. RESULTS: Of the 2,435 inmates interviewed, 2,237 (91.9%) agreed to submit to tuberculin skin testing and of these, 73.0% had positive reactions. The prevalence of tuberculosis was 830.6 per 100,000 inmates. The prevalence coefficients were 1,029.5/100,000 for inmates of the prison and 525.7/100,000 for inmates of the jail. The sociodemographic characteristics of the inmates in the two groups studied were similar; most of the inmates were young and single with little schooling. The epidemiological characteristics differed between the prison units, with the number of cases of previous tuberculosis and of previous contact with the disease greater in the prison, and coughing, expectoration and smoking more common in the jail. Among the 20 Mycobacterium tuberculosis strains identified, 95.0% were sensitive to anti-tuberculosis drugs, and 5.0% were resistant to streptomycin. CONCLUSIONS: The prevalences of tuberculosis and latent tuberculosis were higher in the incarcerated population than in the general population, and they were also higher in the prison than in the jail.
Abstract:
This paper presents a modified Particle Swarm Optimization (PSO) methodology to solve the problem of energy resources management with high penetration of distributed generation and Electric Vehicles (EVs) with gridable capability (V2G). The objective of the day-ahead scheduling problem in this work is to minimize operation costs, namely energy costs, regarding the management of these resources in the smart grid context. The modifications applied to the PSO aimed to improve its adequacy for the mentioned problem. The proposed Application Specific Modified Particle Swarm Optimization (ASMPSO) includes an intelligent mechanism to adjust velocity limits during the search process, as well as self-parameterization of PSO parameters, making it more user-independent. It presents better robustness and convergence characteristics compared with the tested PSO variants, as well as better constraint handling. This enables its use for addressing real-world large-scale problems in much shorter times than deterministic methods, providing system operators with adequate decision support and achieving efficient resource scheduling, even when a significant number of alternative scenarios must be considered. The paper includes two realistic case studies with different penetrations of gridable vehicles (1,000 and 2,000). The proposed methodology is about 2,600 times faster than the Mixed-Integer Non-Linear Programming (MINLP) reference technique, reducing the time required from 25 h to 36 s for the scenario with 2,000 vehicles, with about one percent of difference in the objective function cost value.
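As a plain illustration of the velocity-limit idea behind PSO (this is not the ASMPSO algorithm or its self-parameterization; the objective function, parameters, and shrinking-limit schedule here are all illustrative assumptions):

```python
import random

def pso_sphere(dim=2, swarm=20, iters=200, seed=1):
    """Minimise the sphere function sum(x_i^2) with a basic PSO.

    The velocity limit shrinks over the run, loosely mirroring the idea
    of adjusting velocity limits during the search."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration weights
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                       # personal bests
    pbest_f = [sum(x * x for x in p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_f[i])   # global best
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for t in range(iters):
        vmax = 5.0 * (1.0 - t / iters) + 0.1          # shrinking velocity limit
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax, min(vmax, vel[i][d]))  # clamp velocity
                pos[i][d] += vel[i][d]
            f = sum(x * x for x in pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest_f
```

Tightening the velocity limit late in the run trades exploration for exploitation, which is one reason adaptive limits can improve convergence.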
Abstract:
Dissertation presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the master's degree in Audiovisual and Multimedia.
Abstract:
This dissertation set out to compare the performance of three wood-protection products when exposed to various degradation conditions. Given the large number of variables involved in this type of study, we tried to control as many of them as possible, so as to allow comparison with other studies that may be carried out in the future. The long-term objective is for this dissertation to contribute to an increasingly comprehensive body of results, making the choice easier for users who want to protect wood from external aggression with this type of product. The main conclusion of this dissertation is that the aggressiveness of the environment plays a decisive role in the results obtained, as do the inclination and orientation of the wood. Poor sealing of end grain and failure to account for the presence of biological agents are two other factors that influence the durability of the construction. Finally, it was concluded that there are no one-size-fits-all solutions: the exact exposure conditions must be studied case by case in order to choose the most suitable product or products.
Abstract:
Graphics processors were originally developed for rendering graphics but have recently evolved into an architecture for general-purpose computations. They are also expected to become important parts of embedded systems hardware -- not just for graphics. However, this necessitates the development of appropriate timing analysis techniques, because the techniques developed for CPU scheduling are not applicable. The reason is that we are not interested in how long it takes for any given GPU thread to complete, but rather how long it takes for all of them to complete. We therefore develop a simple method for finding an upper bound on the makespan of a group of GPU threads executing the same program and competing for the resources of a single streaming multiprocessor (whose architecture is based on NVIDIA Fermi, with some simplifying assumptions). We then build upon this method to formulate the derivation of the exact worst-case makespan (and corresponding schedule) as an optimization problem. Addressing the issue of tractability, we also present a technique for efficiently computing a safe estimate of the worst-case makespan with minimal pessimism, which may be used when finding an exact value would take too long.
Abstract:
This work aims to intervene in the Human Resources area of the project's host organisation. It was in this context that we identified the Centro Social e Paroquial de S. Martinho de Brufe for its implementation. The diagnosis carried out identified the Human Resources Management System as the opportunity for intervention. Considering the requirements defined by the Social Responses Quality Assessment Model (MAQRS), a diagnosis of the host organisation was performed. This was followed by the exact characterisation of the identified opportunity and the strategic and operational planning of the strategy. The next phase involved the implementation of the project. We conclude with the evaluation and the presentation of the measures needed to achieve the intended purpose. The evaluation results show that the planning and implementation of the project were efficient and effective, since the final audit found no non-conformities in the intervention project. As the purpose of the project is to ensure that the Centro Social e Paroquial de S. Martinho de Brufe meets all the requirements of Criterion 2 (People) of the Social Responses Quality Assessment Model (MAQRS) of the Instituto da Segurança Social, so as to successfully submit the certification process in July 2014, the following document contains all the procedures needed to ensure its success. The Centro Social e Paroquial de S. Martinho de Brufe has the next six months (January to June 2014) to present evidence of formalisation, which is also a necessary condition preceding submission of the certification process.
Abstract:
The current industry trend is towards using Commercially available Off-The-Shelf (COTS) based multicores for developing real-time embedded systems, as opposed to using custom-made hardware. In a typical implementation of such COTS-based multicores, multiple cores access the main memory via a shared bus. This often leads to contention on this shared channel, which results in an increase in the response time of the tasks. Analyzing this increased response time, considering the contention on the shared bus, is challenging on COTS-based systems mainly because bus arbitration protocols are often undocumented and the exact instants at which the shared bus is accessed by tasks are not explicitly controlled by the operating system scheduler; they are instead a result of cache misses. This paper makes three contributions towards analyzing tasks scheduled on COTS-based multicores. Firstly, we describe a method to model the memory access patterns of a task. Secondly, we apply this model to analyze the worst-case response time for a set of tasks. Although the parameters required to obtain the request profile can be obtained by static analysis, we provide an alternative method to obtain them experimentally using performance monitoring counters (PMCs). We also compare our work against an existing approach and show that our approach outperforms it by providing a tighter upper bound on the number of bus requests generated by a task.
Abstract:
Distributed real-time systems, such as factory automation systems, require that computer nodes communicate with a known and low bound on the communication delay. This can be achieved with traditional time division multiple access (TDMA). But improved flexibility and simpler upgrades are possible through the use of TDMA with slot-skipping (TDMA/SS), meaning that a slot is skipped whenever it is not used and consequently the slot after the skipped slot starts earlier. We propose a schedulability analysis for TDMA/SS. We assume knowledge of all message streams in the system, and that each node schedules messages in its output queue according to deadline monotonic. Firstly, we present a non-exact (but fast) analysis and then, at the cost of computation time, we also present an algorithm that computes exact queuing times.
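The slot-skipping mechanism described above can be illustrated with a small sketch (the function name and the boolean representation of pending messages are illustrative assumptions, not from the paper):

```python
def slot_start_times(cycle, slot_len):
    """TDMA with slot-skipping (TDMA/SS): a slot is skipped whenever its
    node has nothing queued, and every later slot in the cycle starts
    earlier as a result.

    `cycle` is a list of booleans (does the node have a pending message?);
    returns the start time of each used slot, or None for skipped slots."""
    t, starts = 0, []
    for used in cycle:
        if used:
            starts.append(t)
            t += slot_len   # the slot consumes its full length
        else:
            starts.append(None)  # skipped: no bus time consumed
    return starts
```

With plain TDMA every slot would consume `slot_len` regardless; the earlier start times of later slots are exactly what makes the TDMA/SS queuing analysis harder than the classical one.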
Abstract:
This paper proposes a new strategy to integrate shared resources and precedence constraints among real-time tasks, assuming no precise information on critical sections and computation times is available. The concept of bandwidth inheritance is combined with a capacity sharing and stealing mechanism to efficiently exchange bandwidth among tasks to minimise the degree of deviation from the ideal system’s behaviour caused by inter-application blocking. The proposed Capacity Exchange Protocol (CXP) is simpler than other proposed solutions for sharing resources in open real-time systems since it does not attempt to return the inherited capacity in the same exact amount to blocked servers. This loss of optimality is worth the reduced complexity as the protocol’s behaviour nevertheless tends to be fair and outperforms the previous solutions in highly dynamic scenarios as demonstrated by extensive simulations. A formal analysis of CXP is presented and the conditions under which it is possible to guarantee hard real-time tasks are discussed.
Abstract:
Consider the problem of scheduling sporadically-arriving tasks with implicit deadlines using Earliest-Deadline-First (EDF) on a single processor. The system may undergo changes in its operational modes and therefore the characteristics of the task set may change at run-time. We consider a well-established previously published mode-change protocol and we show that if every mode utilizes at most 50% of the processing capacity then all deadlines are met. We also show that there exists a task set that misses a deadline although the utilization exceeds 50% by just an arbitrarily small amount. Finally, we present, for a relevant special case, an exact schedulability test for EDF with mode change.
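The sufficient condition stated above (every mode utilizes at most 50% of the processor) can be checked mechanically. A minimal sketch, assuming each mode is given as a list of (WCET, period) pairs for implicit-deadline sporadic tasks; the function name is illustrative:

```python
def modes_guaranteed(modes):
    """Sufficient schedulability test from the abstract: under EDF with
    the considered mode-change protocol, all deadlines are met if every
    operational mode uses at most 50% of the processing capacity.

    `modes` is a list of modes; each mode is a list of (WCET, period)
    pairs for implicit-deadline sporadic tasks on one processor."""
    def utilization(tasks):
        return sum(c / t for c, t in tasks)
    return all(utilization(tasks) <= 0.5 for tasks in modes)
```

Note this is only sufficient: the abstract also shows a task set that misses a deadline with utilization just above 50%, so exceeding the bound does not by itself prove unschedulability (an exact test exists only for the special case mentioned).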
Abstract:
Optimization problems arise in science, engineering, economy, etc., and we need to find the best solution for each reality. The methods used to solve these problems depend on several factors, including the amount and type of accessible information, the available algorithms for solving them, and, obviously, the intrinsic characteristics of the problem. There are many kinds of optimization problems and, consequently, many kinds of methods to solve them. When the involved functions are nonlinear and their derivatives are unknown or very difficult to calculate, such methods are rarer. These kinds of functions are frequently called black-box functions. To solve such problems without constraints (unconstrained optimization), we can use direct search methods. These methods require neither derivatives nor approximations of them. But when the problem has constraints (nonlinear programming problems) and, additionally, the constraint functions are black-box functions, it is much more difficult to find the most appropriate method. Penalty methods can then be used. They transform the original problem into a sequence of other problems, derived from the initial one, all without constraints. This sequence of unconstrained problems can then be solved using the methods available for unconstrained optimization. In this chapter, we present a classification of some of the existing penalty methods and describe some of their assumptions and limitations. These methods allow solving optimization problems with continuous, discrete, and mixed constraints, without requiring continuity, differentiability, or convexity. Thus, penalty methods can be used as the first step in the resolution of constrained problems, by means of methods that are typically used for unconstrained problems. We also discuss a new class of penalty methods for nonlinear optimization, which adjust the penalty parameter dynamically.
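As an illustration of the general penalty idea described above (a minimal sketch, not a method from the chapter): a classical quadratic penalty for a one-variable problem, with a derivative-free inner search since the functions are treated as black boxes. All names, bounds, and constants are illustrative assumptions:

```python
def penalty_solve(f, g, mu0=1.0, growth=10.0, iters=8):
    """Quadratic-penalty sketch for: minimise f(x) s.t. g(x) <= 0.

    Each outer iteration minimises the unconstrained penalised function
    P(x) = f(x) + mu * max(0, g(x))**2, then increases mu, yielding the
    sequence of unconstrained problems described in the text."""
    def minimise_1d(phi, lo=-10.0, hi=10.0, steps=200):
        # Derivative-free golden-section search: penalty methods only
        # need function values when the objective is a black box.
        inv_phi = (5 ** 0.5 - 1) / 2
        a, b = lo, hi
        for _ in range(steps):
            c = b - inv_phi * (b - a)
            d = a + inv_phi * (b - a)
            if phi(c) < phi(d):
                b = d
            else:
                a = c
        return (a + b) / 2

    x, mu = None, mu0
    for _ in range(iters):
        x = minimise_1d(lambda y: f(y) + mu * max(0.0, g(y)) ** 2)
        mu *= growth   # tighten the penalty each outer iteration
    return x
```

For instance, minimising f(x) = x² subject to x ≥ 1 (i.e. g(x) = 1 - x ≤ 0) drives the iterates towards the constrained optimum x = 1 as the penalty parameter grows; exact penalty methods, by contrast, can recover the solution with a finite penalty parameter.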
Abstract:
In this work we present a classification of some of the existing penalty methods (namely the Exact Penalty Methods) and describe some of their assumptions and limitations. With these methods we can solve optimization problems with continuous, discrete, and mixed constraints, without requiring continuity, differentiability, or convexity. The approach consists of transforming the original problem into a sequence of unconstrained problems, derived from the initial one, making it possible to solve them with the methods known for this type of problem. Thus, penalty methods can be used as the first step in the resolution of constrained problems, by methods typically used for unconstrained problems. The work finishes by discussing a new class of penalty methods for nonlinear optimization that adjust the penalty parameter dynamically.