61 results for accelerated failure time model
at Instituto Politécnico do Porto, Portugal
Abstract:
Nowadays, many real-time operating systems discretize time using a system time unit. To take this behaviour into account, real-time scheduling algorithms must adopt a discrete-time model in which both the timing requirements of tasks and their time allocations are integer multiples of the system time unit. That is, tasks cannot be executed for less than one time unit, which implies that they always have to complete a minimum amount of work before they can be preempted. Assuming such a discrete-time model, the authors of Zhu et al. (Proceedings of the 24th IEEE International Real-Time Systems Symposium (RTSS 2003), 2003; J Parallel Distrib Comput 71(10):1411–1425, 2011) proposed an efficient "boundary fair" algorithm (named BF) and proved its optimality for the scheduling of periodic tasks while achieving full system utilization. However, BF cannot handle sporadic tasks due to their inherently irregular and unpredictable job release patterns. In this paper, we propose an optimal boundary-fair scheduling algorithm for sporadic tasks (named BF²), which follows the same principle as BF by making scheduling decisions only at job arrival times and (expected) task deadlines. This new algorithm was implemented in Linux, and we show through experiments conducted on a multicore machine that BF² outperforms the state-of-the-art discrete-time optimal scheduler (PD²), benefiting from much lower scheduling overheads. Furthermore, these experimental results show that BF² is barely dependent on the length of the system time unit, while PD² (the only other existing solution for the scheduling of sporadic tasks in discrete-time systems) sees its numbers of preemptions and migrations and the time spent taking scheduling decisions increase linearly as the time resolution of the system is improved.
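As a rough illustration of the boundary-fair idea summarised above, the following is a minimal sketch assuming a simple periodic task set: scheduling decisions are taken only at boundaries (job releases and expected deadlines), and between two consecutive boundaries each task receives an integer number of time units close to its proportional share. The largest-remainder allocation rule below is a simplification for illustration, not the published BF (or BF²) allocation and eligibility rules.

```python
from math import floor

def boundaries(release_times, deadlines, horizon):
    """Boundaries = job release times and (expected) deadlines up to the
    horizon; scheduling decisions are taken only at these instants."""
    pts = {0, horizon}
    pts.update(t for t in release_times if t <= horizon)
    pts.update(d for d in deadlines if d <= horizon)
    return sorted(pts)

def allocate(utilizations, interval_length):
    """Integer time-unit allocation for one boundary interval.
    Each task first receives the floor of its proportional share; the
    remaining units go to the tasks with the largest fractional
    remainders (a simplified stand-in for the real eligibility rules)."""
    shares = [u * interval_length for u in utilizations]
    alloc = [floor(s) for s in shares]
    leftover = round(sum(shares)) - sum(alloc)
    by_remainder = sorted(range(len(shares)),
                          key=lambda i: shares[i] - alloc[i], reverse=True)
    for i in by_remainder[:leftover]:
        alloc[i] += 1
    return alloc

print(boundaries([0, 3, 7], [5, 10], 10))   # [0, 3, 5, 7, 10]
# Three tasks with utilizations 0.5, 0.7, 0.8 (two cores fully used),
# boundary interval of length 5 -> allocations summing to 10 time units.
print(allocate([0.5, 0.7, 0.8], 5))         # e.g. [3, 3, 4]
```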
Abstract:
In recent years, new air pollution measurement devices based on low-cost sensors have been introduced. The lower complexity of these systems makes it possible to obtain data with high temporal and spatial resolution, opening new opportunities for different air pollution monitoring methodologies. Although their analytical performance is still far from that of the reference methods, the use of these sensors has been suggested and encouraged by the European Union in the context of the indicative measurements foreseen in Directive 2008/50/EC, with a maximum expanded uncertainty of 25%. The work developed within the Project course consisted of selecting, characterizing and using in real measurements an air quality sensor, integrated into a prototype device developed for that purpose, aiming to obtain an estimate of the measurement uncertainty associated with the use of this device by applying the methodology for demonstrating the equivalence of air quality measurement methods defined by the European Union. The literature review showed that carbon monoxide is currently the air quality parameter that can be measured most accurately with sensors, in particular with the Alphasense electrochemical sensor, model COB4, widely used in development projects in this environmental monitoring context. The sensor was integrated into a measurement system designed to operate autonomously in terms of power supply and internal data acquisition, while being as small and as low-cost as possible. A system based on the Arduino Uno board was used, with data logging to an SD memory card, batteries and a solar panel, allowing the sensor voltages as well as temperature, relative humidity and atmospheric pressure to be recorded, at an overall cost of around 300 euros. In a first phase, a set of laboratory tests was carried out to determine several performance characteristics of two identical sensors: response time, the sensor model equation, repeatability, short- and long-term drift, temperature interference and hysteresis. The results showed very linear sensor behaviour, with a response time below one minute and a sensor model equation dependent on temperature variation. The estimated laboratory expanded uncertainty was below 10% for both sensors. After two field measurement campaigns in which the CO values were very low, a fifteen-day campaign was carried out in an underground car park, which yielded sufficiently high concentrations and allowed the sensor results to be compared with the reference method over the whole measurement range (0 to 12 mol.mol-1). The concentrations obtained by the two sensors showed an excellent correlation with the reference method (r² ≥ 0.998), and the estimated field expanded uncertainty was lower than the laboratory uncertainty, meeting the data quality objective defined for indicative measurements of a maximum expanded uncertainty of 25%.
The results observed throughout this work confirm the good performance that this type of sensor can achieve in air pollution measurements of a more indicative character.
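A minimal sketch of the kind of processing such a prototype involves, assuming a hypothetical linear sensor model with a temperature term and a crude residual-based uncertainty indicator; the coefficients A0, A1, A2, the electrode-voltage inputs and the limit value used below are illustrative assumptions, not the calibration obtained in this work nor the orthogonal-regression procedure prescribed by the EU equivalence guide.

```python
import numpy as np

# Hypothetical model coefficients (illustrative only):
# CO = A0 + A1 * (V_we - V_ae) + A2 * T
A0, A1, A2 = -0.5, 1.2e-2, 0.03

def co_from_sensor(v_we_mV, v_ae_mV, temp_C):
    """Convert raw working/auxiliary electrode voltages (mV) and
    temperature (deg C) into a CO value using the assumed model above."""
    return A0 + A1 * (v_we_mV - v_ae_mV) + A2 * temp_C

def relative_expanded_uncertainty(sensor, reference, limit_value=10.0):
    """Crude field-uncertainty indicator: twice the standard deviation of
    the sensor-vs-reference residuals, expressed relative to a limit value
    (percent). Only a simplified stand-in for the equivalence methodology,
    which relies on orthogonal regression."""
    residuals = np.asarray(sensor) - np.asarray(reference)
    return 2.0 * residuals.std(ddof=1) / limit_value * 100.0

# Example with made-up co-located sensor and reference readings:
sensor_vals = [1.1, 2.3, 4.8, 7.9, 11.6]
reference_vals = [1.0, 2.5, 5.0, 8.0, 11.5]
print(relative_expanded_uncertainty(sensor_vals, reference_vals))
```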
Fuzzy Monte Carlo mathematical model for load curtailment minimization in transmission power systems
Abstract:
This paper presents a methodology based on statistical failure and repair data of the transmission power system components, which uses fuzzy-probabilistic modeling of the system component outage parameters. The statistical records are used to develop the fuzzy membership functions of the system component outage parameters. The proposed hybrid method, combining fuzzy sets and Monte Carlo simulation based on the fuzzy-probabilistic models, captures both the randomness and the fuzziness of the component outage parameters. Once the system states have been obtained by Monte Carlo simulation, a network contingency analysis is performed to identify any overloading or voltage violation in the network. This is followed by a remedial action algorithm, based on optimal power flow, which reschedules generation to alleviate constraint violations and, at the same time, avoid any load curtailment if possible or otherwise minimize the total load curtailment for the states identified by the contingency analysis. To illustrate the application of the proposed methodology to a practical case, the paper includes a case study based on the 1996 IEEE 24-bus Reliability Test System (RTS).
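A minimal sketch of the hybrid fuzzy/Monte Carlo sampling idea, assuming triangular membership functions for the component failure rates, crisp repair rates and a simple two-state (up/down) component model; the alpha-cut sampling scheme and the numbers below are illustrative assumptions, not the exact procedure of the paper.

```python
import random

def triangular_alpha_cut(low, mode, high, alpha):
    """Interval obtained by cutting a triangular membership function
    (low, mode, high) at level alpha, with 0 <= alpha <= 1."""
    return (low + alpha * (mode - low), high - alpha * (high - mode))

def sample_component_state(fail_rate_tri, repair_rate, rng=random):
    """Draw one up/down state for a component whose failure rate is a
    fuzzy (triangular) number: pick an alpha level, pick a crisp rate
    inside the corresponding cut, then perform an ordinary Monte Carlo
    draw against the resulting unavailability."""
    alpha = rng.random()
    lo, hi = triangular_alpha_cut(*fail_rate_tri, alpha)
    lam = rng.uniform(lo, hi)
    unavailability = lam / (lam + repair_rate)
    return "down" if rng.random() < unavailability else "up"

# Example: a line with fuzzy failure rate (0.3, 0.5, 0.8) occ./year and a
# repair rate of 50 repairs/year, sampled over 10,000 system states.
states = [sample_component_state((0.3, 0.5, 0.8), 50.0) for _ in range(10000)]
print(states.count("down") / len(states))
```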
Abstract:
This thesis presents the Fuzzy Monte Carlo Model for Transmission Power Systems Reliability-based studies (FMC-TRel) methodology, which is based on statistical failure and repair data of the transmission power system components and uses fuzzy-probabilistic modeling of the system component outage parameters. The statistical records are used to develop the fuzzy membership functions of the system component outage parameters. The proposed hybrid method, combining fuzzy sets and Monte Carlo simulation based on the fuzzy-probabilistic models, captures both the randomness and the fuzziness of the component outage parameters. Once the system states have been obtained, a network contingency analysis is performed to identify any overloading or voltage violation in the network. This is followed by a remedial action algorithm, based on Optimal Power Flow, which reschedules generation to alleviate constraint violations and, at the same time, avoid any load curtailment if possible or otherwise minimize the total load curtailment for the states identified by the contingency analysis. For the system states that cause load curtailment, an optimization approach is applied to reduce the probability of occurrence of these states while minimizing the cost of achieving that reduction. This methodology is of great importance for supporting the transmission system operator's decision making, namely in the identification of critical components and in the planning of future investments in the transmission power system. A case study based on the 1996 IEEE 24-bus Reliability Test System (RTS) is presented to illustrate in detail the application of the proposed methodology.
Abstract:
Moving towards autonomous operation and management of increasingly complex open distributed real-time systems poses very significant challenges. This is particularly true when reactions to events must occur in a timely and predictable manner while guaranteeing Quality of Service (QoS) constraints imposed by users, the environment, or applications. In these scenarios, the system should be able to maintain a globally feasible QoS level while allowing individual nodes to adapt autonomously under different constraints of resource availability and input quality. This paper shows how decentralised coordination of a group of autonomous interdependent nodes can emerge with little communication, based on the robust self-organising principles of feedback. Positive feedback is used to reinforce the selection of the new desired global service solution, while negative feedback discourages nodes from acting in a greedy fashion, since this adversely impacts the service levels provided at neighbouring nodes. The proposed protocol is general enough to be used in a wide range of scenarios characterised by a high degree of openness and dynamism where coordination tasks need to be time-dependent. As the reported results demonstrate, it requires fewer messages to be exchanged and reaches a globally acceptable near-optimal solution faster than other available approaches.
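A toy sketch, purely for intuition, of how positive and negative feedback could drive each node's local QoS level: positive feedback pulls the node towards the agreed global target, while negative feedback penalises levels that sit well above the neighbourhood average (greedy behaviour). The update rule, the gains and the neighbourhood averaging are invented for illustration and are not the protocol proposed in the paper.

```python
def update_qos(own_level, neighbour_levels, target, k_pos=0.3, k_neg=0.2):
    """One local update step of a node's QoS level (illustrative rule only).
    Positive feedback reinforces convergence towards the desired target;
    negative feedback discourages levels far above the neighbours' average,
    which would degrade the service they can provide."""
    avg_neighbours = sum(neighbour_levels) / len(neighbour_levels)
    positive = k_pos * (target - own_level)
    negative = k_neg * max(0.0, own_level - avg_neighbours)
    return own_level + positive - negative

# Example: a node at level 0.9 with neighbours around 0.5 and a target of 0.7
print(update_qos(0.9, [0.5, 0.55, 0.45], 0.7))   # moves down, towards 0.76
```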
Abstract:
Many-core platforms are an emerging technology in the real-time embedded domain. These devices offer various options for power savings and cost reductions and contribute to overall system flexibility; however, issues such as unpredictability, scalability and analysis pessimism are serious challenges to their integration in this domain. The focus of this work is on many-core platforms using a limited migrative model (LMM). LMM is an approach based on the fundamental concepts of the multi-kernel paradigm, which is a promising step towards scalable and predictable many-cores. In this work, we formulate the problem of real-time application mapping on a many-core platform using LMM, and propose a three-stage method to solve it. An extended version of the existing analysis is used to ensure that the derived mappings (i) guarantee the fulfilment of the timing constraints posed on the worst-case communication delays of individual applications, and (ii) provide an environment in which load balancing can be performed for, e.g., energy/thermal management, fault tolerance and/or performance reasons.
Abstract:
The 30th ACM/SIGAPP Symposium On Applied Computing (SAC 2015). 13 to 17, Apr, 2015, Embedded Systems. Salamanca, Spain.
Abstract:
3rd Workshop on High-performance and Real-time Embedded Systems (HIRES 2015). 21, Jan, 2015. Amsterdam, Netherlands.
Abstract:
23rd International Conference on Real-Time Networks and Systems (RTNS 2015). 4 to 6, Nov, 2015, Main Track. Lille, France. Best Paper Award Nominee
Abstract:
Presented at the IEEE Real-Time Systems Symposium (RTSS 2015). 1 to 4, Dec, 2015. San Antonio, U.S.A.
Abstract:
Adipose tissue is a dynamic endocrine organ that secretes factors important for the regulation of metabolism, blood and lymphatic vascular flow, and immune function, among others. When adipose tissue accumulates, whether through the ingestion of a high-fat diet or through metabolic dysfunction, adipocytes can trigger an inflammatory reaction due to failure of lymphatic drainage, with the accumulation of inflammatory mediators that further propagate the reaction. This raises the question of a potential association between increased adipose tissue in obesity, adipocyte hypoxia and stimulation of lymphangiogenesis. In addition, adipokine expression varies with the distribution of adipose tissue (subcutaneous, TAS, and visceral, TAV). This study therefore aims to contribute to a better understanding of the complex molecular mechanisms underlying lymphangiogenesis. Experiments were performed with C57Bl/6J mice (obesity model) and BALB/c mice (asthma and obesity model), divided into groups fed a normal diet or a high-fat diet. Tissue expression of LYVE-1 (a lymphangiogenesis marker) was assessed semi-quantitatively by immunohistochemistry in paraffin-embedded TAS and TAV, and ultra-performance liquid chromatography coupled to mass spectrometry (UPLC-MS) was used to analyse plasma levels of ceramide and sphingosine-1-phosphate (S1P). In the obesity model, the number of lymphatic vessels and the expression of LYVE-1 decreased over time in TAV, whereas both parameters increased, together with adipocyte hypertrophy, in TAS. The ceramide and S1P concentrations corroborate the existence of an inflammatory process in the mice under study, albeit at a very early stage. In the asthma and obesity model, after 17 weeks of treatment, increased lymphangiogenesis was observed in TAV but not in TAS. The inflammatory response assessed through the different parameters indicates that, at an early stage of obesity, lymphatic proliferation may be delayed by adipocyte hypertrophy. Adipokine release would only be observed at a later stage, triggering the inflammatory process that enhances lymphatic proliferation. Additionally, the results suggest that the higher pressure to which TAV is subjected does not favour lymphatic proliferation, at least at an initial stage.
Abstract:
Dissertation submitted for the degree of Master in Accounting and Finance. Supervisor: Mestre Paulino Manuel Leite da Silva.
Abstract:
Value has been defined in different theoretical contexts as need, desire, interest, standards/criteria, beliefs, attitudes, and preferences. The creation of value is key to any business, and any business activity is about exchanging some tangible and/or intangible good or service and having its value accepted and rewarded by customers or clients, either inside the enterprise or collaborative network or outside. “Perhaps surprising then is that firms often do not know how to define value, or how to measure it” (Anderson and Narus, 1998, cited by [1]). Woodruff echoed that we need “richer customer value theory” to provide an “important tool for locking onto the critical things that managers need to know”. In addition, he emphasized that “we need customer value theory that delves deeply into customer’s world of product use in their situations” [2]. In this sense, we proposed and validated a novel “Conceptual Model for Decomposing the Value for the Customer”. In doing so, we were aware that time has a direct impact on customer-perceived value, and that suppliers’ and customers’ perceptions change from the pre-purchase to the post-purchase phase, causing some uncertainty and doubt. We wanted to break value down into all its components, as well as all the assets built and used (from both endogenous and exogenous perspectives). This component analysis was then transposed into a mathematical formulation using the Fuzzy Analytic Hierarchy Process (AHP), so that the uncertainty and vagueness of value perceptions could be embedded in a model that relates the assets used and built, in the tangible and intangible deliverables exchanged among the involved parties, to their actual value perceptions.
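Since the mathematical formulation relies on the Fuzzy Analytic Hierarchy Process, a minimal sketch of a standard fuzzy AHP weighting step may help fix ideas: Buckley's geometric-mean method with triangular fuzzy numbers and centroid defuzzification. The two criteria and the pairwise judgements in the example are hypothetical and are not taken from the validated model.

```python
import numpy as np

def fuzzy_ahp_weights(comparisons):
    """Priority weights from an n x n matrix of triangular fuzzy pairwise
    comparisons (each entry is (l, m, u)), using Buckley's geometric-mean
    method followed by centroid defuzzification and normalization."""
    M = np.array(comparisons, dtype=float)           # shape (n, n, 3)
    geo = np.prod(M, axis=1) ** (1.0 / M.shape[0])   # fuzzy geometric means
    total = geo.sum(axis=0)                          # fuzzy sum over criteria
    # Fuzzy division (l, m, u) / (L, M, U) ~ (l/U, m/M, u/L)
    fuzzy_w = np.stack([geo[:, 0] / total[2],
                        geo[:, 1] / total[1],
                        geo[:, 2] / total[0]], axis=1)
    crisp = fuzzy_w.mean(axis=1)                     # centroid of (l, m, u)
    return crisp / crisp.sum()

# Hypothetical example: criterion A judged "moderately more important"
# than criterion B, expressed as the triangular fuzzy number (2, 3, 4).
pairwise = [[(1, 1, 1), (2, 3, 4)],
            [(1/4, 1/3, 1/2), (1, 1, 1)]]
print(fuzzy_ahp_weights(pairwise))   # approximately [0.74, 0.26]
```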
Abstract:
This paper presents a methodology for distribution network reconfiguration in the presence of outages, in order to choose the reconfiguration with the lowest power losses. The methodology is based on statistical failure and repair data of the distribution power system components and uses fuzzy-probabilistic modelling of the system component outage parameters. The fuzzy membership functions of the system component outage parameters are obtained from statistical records. A hybrid method combining fuzzy sets and Monte Carlo simulation based on the fuzzy-probabilistic models captures both the randomness and the fuzziness of the component outage parameters. Once the system states have been obtained by Monte Carlo simulation, a logic programming algorithm is applied to obtain all possible reconfigurations for every system state. A distribution power flow is then applied to evaluate the line flows and bus voltages, identify any overloading and/or voltage violation, and select the feasible reconfiguration with the lowest power losses. To illustrate the application of the proposed methodology to a practical case, the paper includes a case study that considers a real distribution network.
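A minimal sketch of the final selection step, assuming a hypothetical run_power_flow callable that returns the losses and the constraint-violation flags for a candidate reconfiguration applied to a given sampled system state; it is not the actual distribution power flow or reconfiguration algorithm used in the paper.

```python
def best_reconfiguration(system_state, candidate_reconfigs, run_power_flow):
    """Among all reconfigurations generated for one sampled system state,
    keep those with no overloading or voltage violation and return the
    best configuration and its losses (None and inf if none is feasible).
    `run_power_flow(state, config)` is a hypothetical callable returning
    (losses_kW, has_overload, has_voltage_violation)."""
    best, best_losses = None, float("inf")
    for config in candidate_reconfigs:
        losses, overload, violation = run_power_flow(system_state, config)
        if overload or violation:
            continue                      # infeasible candidate, skip it
        if losses < best_losses:
            best, best_losses = config, losses
    return best, best_losses
```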
Abstract:
Involving groups in important management processes such as decision making has several advantages. By discussing and combining ideas, counter-ideas, critical opinions, identified constraints, and alternatives, a group of individuals can test potentially better solutions, sometimes in the form of new products, services, and plans. In the past few decades, operations research, AI, and computer science have had tremendous success creating software systems that can achieve optimal solutions, even for complex problems. The only drawback is that people don’t always agree with these solutions. Sometimes this dissatisfaction is due to an incorrect parameterization of the problem. Nevertheless, the reasons people don’t like a solution might not be quantifiable, because those reasons are often based on aspects such as emotion, mood, and personality. At the same time, monolithic individual decision-support systems centered on optimizing solutions are being replaced by collaborative systems and group decision-support systems (GDSSs) that focus more on establishing connections between people in organizations. These systems follow a kind of social paradigm. Combining both optimization- and social-centered approaches is a topic of current research. However, even if such a hybrid approach can be developed, it will still miss an essential point: the emotional nature of group participants in decision-making tasks. We’ve developed a context-aware, emotion-based model to design intelligent agents for group decision-making processes. To evaluate this model, we’ve incorporated it in an agent-based simulator called ABS4GD (Agent-Based Simulation for Group Decision), which we developed. This multiagent simulator considers emotion- and argument-based factors while supporting group decision-making processes. Experiments show that agents endowed with emotional awareness reach agreements more quickly than those without such awareness. Hence, participant agents that integrate emotional factors in their judgments can be more successful because, in exchanging arguments with other agents, they take into account the emotional nature of group decision making.