903 results for Variable splitting augmented Lagrangian
Abstract:
Multiprocessors, particularly in the form of multicores, are becoming standard building blocks for executing reliable software. But their use for applications with hard real-time requirements is non-trivial. Well-known real-time scheduling algorithms in the uniprocessor context (Rate-Monotonic [1] or Earliest-Deadline-First [1]) do not perform well on multiprocessors. For this reason, the scientific community in the area of real-time systems has produced new algorithms specifically for multiprocessors. Meanwhile, a proposal [2] exists for extending the Ada language with new basic constructs which can be used for implementing new real-time scheduling algorithms; the family of task-splitting algorithms is one of those emphasized in the proposal [2]. Consequently, assessing whether existing task-splitting multiprocessor scheduling algorithms can be implemented with these constructs is paramount. In this paper we present a list of state-of-the-art task-splitting multiprocessor scheduling algorithms and, for each of them, detailed Ada code that uses the new constructs.
Abstract:
Aiming at teaching/learning support in science and engineering areas, the Remote Experimentation concept (an E-learning subset) has grown in recent years with the development of several infrastructures that enable doing practical experiments from anywhere and at any time, using a simple PC connected to the Internet. Nevertheless, given its valuable contribution to the teaching/learning process, the development of further infrastructures should continue, in order to make available more solutions able to improve courseware contents and motivate students to learn. The work presented in this paper contributes to that purpose in the specific area of industrial automation. After a brief introduction to the Remote Experimentation concept, we describe a remotely accessible lab infrastructure that enables users to conduct real experiments with an important and widely used transducer in industrial automation, the Linear Variable Differential Transformer.
Abstract:
In this paper we develop an appropriate theory of positive definite functions on the complex plane from first principles and show some consequences of positive definiteness for meromorphic functions.
Abstract:
The study of transient dynamical phenomena near bifurcation thresholds has attracted the interest of many researchers due to the relevance of bifurcations in different physical or biological systems. In the context of saddle-node bifurcations, where two or more fixed points collide annihilating each other, it is known that the dynamics can suffer the so-called delayed transition. This phenomenon emerges when the system spends a lot of time before reaching the remaining stable equilibrium, found after the bifurcation, because of the presence of a saddle remnant in phase space. Some works have analytically tackled this phenomenon, especially in time-continuous dynamical systems, showing that the time delay, tau, scales according to an inverse square-root power law, tau ~ (mu - mu_c)^(-1/2), as the bifurcation parameter, mu, is driven further away from its critical value, mu_c. In this work, we first characterize analytically this scaling law using complex variable techniques for a family of one-dimensional maps, called the normal form for the saddle-node bifurcation. We then apply our general analytic results to a single-species ecological model with harvesting given by a unimodal map, characterizing the delayed transition and the scaling law arising due to the constant of harvesting. For both analyzed systems, we show that the numerical results are in perfect agreement with the analytical solutions we are providing. The procedure presented in this work can be used to characterize the scaling laws of one-dimensional discrete dynamical systems with saddle-node bifurcations.
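The inverse square-root law summarized above can be checked numerically for the normal-form map. The sketch below is a minimal illustration, not the authors' code: the map x -> x + x^2 + mu, the starting point, and the exit threshold are generic choices. It counts the iterations spent crossing the bottleneck for several values of mu past the bifurcation and estimates the log-log slope, which should approach -1/2:

```python
import math

def delay_time(mu, x0=-1.0, x_exit=1.0, max_iter=10**7):
    """Iterate the saddle-node normal-form map x -> x + x^2 + mu
    and count the iterations before escaping the bottleneck."""
    x, n = x0, 0
    while x < x_exit and n < max_iter:
        x = x + x * x + mu
        n += 1
    return n

# Measure tau for bifurcation parameters approaching mu_c = 0 from above.
mus = [10**-k for k in range(3, 7)]
taus = [delay_time(mu) for mu in mus]

# Log-log slopes between successive points should approach -1/2.
slopes = [
    (math.log(taus[i + 1]) - math.log(taus[i]))
    / (math.log(mus[i + 1]) - math.log(mus[i]))
    for i in range(len(mus) - 1)
]
print(slopes)
```

Each tenfold decrease in mu multiplies the delay by roughly sqrt(10), which is the discrete-map counterpart of the tau ~ (mu - mu_c)^(-1/2) law characterized in the paper.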
Abstract:
We have conducted a P and S receiver function (PRF and SRF) analysis for 19 seismic stations in Iberia and the western Mediterranean. In the transition zone (TZ), the PRF analysis reveals a band (from Gibraltar to the Balearics) where the TZ thickness is increased by 10-20 km relative to the standard 250 km. The TZ thickness variations are strongly correlated with the P660s times in the PRFs. We interpret the variable depth of the 660-km discontinuity as an effect of subduction. Above the anomalous TZ we found a reduced-velocity zone in the upper mantle. Joint inversion of PRFs and SRFs reveals a subcrustal high-S-velocity lid and an underlying low-velocity zone (LVZ). The reduction of the S velocity in the LVZ is less than 10%. The Gutenberg discontinuity is located at 65±5 km, but in several models sampling the Mediterranean the lid is missing or its thickness is reduced to ~30 km. Beneath Gibraltar and North Africa this boundary is located at ~100 km. The lid Vp/Vs beneath the Betics is reduced relative to the standard 1.8. Further evidence of the Vp/Vs anomaly is provided by late arrivals of the S410p phase in the SRFs. Azimuthal anisotropy analysis with a new technique was conducted at 5 stations and at 2 groups of stations. The fast direction in the uppermost mantle layer is ~90º in the Iberian Massif; in the Balearics it is at an azimuth of ~120º. At a depth of ~60 km the direction becomes 90º. Anisotropy in the upper layer can be interpreted as frozen, whereas anisotropy in the lower layer is active, corresponding to present-day or recent flow. The effect of the asthenosphere on SKS splitting is much larger than the effect of the subcrustal lithosphere.
Abstract:
IEEE International Symposium on Circuits and Systems, pp. 2713 – 2716, Seattle, USA
Abstract:
This paper is on an onshore variable speed wind turbine with a doubly fed induction generator under supervisory control. The control architecture is equipped with an event-based supervisor at the supervision level and with fuzzy proportional-integral or discrete adaptive linear quadratic controllers proposed for the execution level. The supervisory control assesses the operational state of the variable speed wind turbine and sends the state to the execution level. The controllers operate in the full load region to extract energy at full power from the wind while ensuring the safety conditions required to inject the energy into the electric grid. A comparison between simulations of the proposed controllers, with the inclusion of the supervisory control, on the variable speed wind turbine benchmark model is presented to assess the advantages of these controls. (C) 2015 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Abstract:
Energy efficiency plays an important role in reducing CO2 emissions, combating climate change, and improving the competitiveness of the economy. The problem presented here is related to the use of stand-alone diesel gen-sets and their high specific fuel consumption when operating at low loads. The variable speed gen-set concept is explained as an energy-saving solution to improve this system's efficiency. This paper details how an optimum fuel consumption trajectory is obtained, based on an experimentally measured Diesel engine power map.
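The idea of an optimum fuel consumption trajectory can be sketched as follows. This is purely illustrative: the specific-fuel-consumption map below is a synthetic bowl-shaped stand-in, not the experimentally measured map from the paper, and all numbers are hypothetical. For each power demand, the sketch picks the engine speed with the lowest specific fuel consumption:

```python
def sfc(speed_rpm, power_kw):
    """Synthetic specific fuel consumption in g/kWh (illustrative only):
    a bowl-shaped map whose best speed rises with load."""
    best_speed = 1200 + 20 * power_kw
    return 210 + 0.00005 * (speed_rpm - best_speed) ** 2

def optimum_speed(power_kw, speeds=range(800, 3001, 10)):
    """Pick the engine speed with minimum sfc for the requested power."""
    return min(speeds, key=lambda s: sfc(s, power_kw))

# Optimum speed trajectory over a few power demands (kW -> rpm).
trajectory = {p: optimum_speed(p) for p in (10, 20, 40, 60)}
print(trajectory)
```

On a real gen-set the map would come from engine test-bench measurements, but the trajectory-building step (minimizing sfc along each constant-power line) is the same.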
Abstract:
Hard real-time multiprocessor scheduling has seen, in recent years, the flourishing of semi-partitioned scheduling algorithms. This category of scheduling schemes combines elements of partitioned and global scheduling for the purpose of achieving efficient utilization of the system’s processing resources with strong schedulability guarantees and with low dispatching overheads. The sub-class of slot-based “task-splitting” scheduling algorithms, in particular, offers very good trade-offs between schedulability guarantees (in the form of high utilization bounds) and the number of preemptions/migrations involved. However, so far no unified schedulability theory existed for such algorithms; each was formulated with its own accompanying analysis. This article changes this fragmented landscape by formulating a more unified schedulability theory covering the two state-of-the-art slot-based semi-partitioned algorithms, S-EKG and NPS-F (both fixed job-priority based). This new theory is based on exact schedulability tests, thus also overcoming many sources of pessimism in existing analyses. In turn, since schedulability testing guides the task assignment under the schemes in consideration, we also formulate an improved task assignment procedure. As the other main contribution of this article, and as a response to the fact that many unrealistic assumptions present in the original theory tend to undermine the theoretical potential of such scheduling schemes, we identified and modelled into the new analysis all overheads incurred by the algorithms in consideration. The outcome is a new overhead-aware schedulability analysis that permits increased efficiency and reliability. The merits of this new theory are evaluated by an extensive set of experiments.
Abstract:
Today, climate change is one of the greatest concerns for the world's population and for scientists everywhere. Because the population grows exponentially, energy demand increases accordingly, so energy-production activities increase as well, and these are the main drivers of the acceleration of climate change. Although many countries had previously committed to energy production from clean technologies based on renewable sources, today it is impossible to do without fossil fuels since, together with nuclear energy, they account for the largest share of the energy mix of the world's largest countries; the change must therefore be global, with all countries involved acting in unison. For this reason, the developed countries decided to agree on a series of laws and norms for regulating and controlling the expansion of energy production worldwide, through programs that give companies incentives to produce clean, emission-free energy, replacing and improving technological processes so that they guarantee sustainable development. This would also reduce energy dependence on the countries that produce the most important fossil resources and, in turn, help other sectors diversify their business, thereby improving the economy of the areas surrounding thermal power plants. Thanks to these incentive programs, also called flexibility mechanisms, energy-producing companies that invest in clean technology stop emitting greenhouse gases into the atmosphere. Therefore, through emissions trading and the voluntary market, companies can sell those emissions, increasing the profitability of their projects and making investment in clean technology more attractive in itself.
The project developed here examines all of the above in greater depth. To that end, a calculation tool will be developed that allows us to analyze the benefits obtained by replacing a non-renewable fossil fuel with a renewable, sustainable one such as biomass. The tool estimates the CO2 emission reductions implied by this substitution and determines, as a function of carbon-credit prices in the different markets, the economic benefit obtained from selling the emissions avoided by the substitution. Finally, this benefit is inserted into an economic balance of the plant that takes into account other variables, such as the fuel price or electricity-price fluctuations, in order to determine the return on investment of this adaptation of the plant. To complement and apply the calculation tool, two practical case studies of a coal-fired plant are analyzed, in which the plant opts in within the context of the flexibility mechanisms created under international agreements.
Abstract:
Nowadays, real-time systems grow in importance and complexity. In the transition from the uniprocessor to the multiprocessor environment, work done for the former does not fully carry over to the latter, since the level of complexity differs, mainly due to the existence of multiple processors in the system. It was realized early on that the complexity of the problem does not grow linearly with their addition. In fact, this complexity is a barrier to scientific progress in this area that, for now, remains poorly understood, and this is witnessed essentially in the case of task scheduling. The move to this new environment, whether for real-time systems or not, promises opportunities to do work that would never be possible in the former, thus creating new performance guarantees, lower monetary costs, and lower energy consumption. This last factor proved early on to be perhaps the greatest barrier to the development of new uniprocessor chips: as new ones were released to the market, offering ever more performance, they also revealed a heat-generation limit that forced the emergence of the multiprocessor field. In the future, the number of processors on a given chip is expected to increase and, obviously, new techniques to exploit their inherent advantages must be developed; the area of scheduling algorithms is no exception. Over the years, different categories of multiprocessor algorithms have been developed to address this problem, the main ones being: global, partitioned, and semi-partitioned. The global approach assumes the existence of a global queue that is accessible by all available processors.
This fact makes task migration possible, that is, it is possible to stop the execution of a task and resume it on a different processor. At any given instant, from a set of tasks, the m highest-priority tasks are selected for execution. This type promises high utilization bounds, at a high cost in task preemptions/migrations. In contrast, partitioned algorithms place tasks into partitions, and these are assigned to one of the available processors, that is, each processor is assigned one partition. For that reason, task migration is not possible, which means the utilization bound is not as high as in the previous case, but the number of task preemptions decreases significantly. The semi-partitioned scheme is a hybrid answer between the previous cases: some tasks are split, to be executed exclusively by a group of processors, while others are assigned to a single processor. This yields a solution capable of distributing the work to be done in a more efficient and balanced way. Unfortunately, in all these cases there is a discrepancy between theory and practice, because assumptions end up being made that do not hold in real life. To address this problem, it is necessary to implement these scheduling algorithms in real operating systems and assess their applicability so that, where assumptions fail, the necessary changes can be made, whether at the theoretical or the practical level.
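The partitioned category described above can be sketched with a generic first-fit-decreasing heuristic (an illustrative assumption, not any specific algorithm studied in this work): each task, represented here only by its utilization, is assigned to the first processor where it fits under a per-processor utilization bound of 1.0, and migration is never needed because the assignment is fixed:

```python
def partition_first_fit(utilizations, m):
    """Partitioned scheduling sketch: assign each task to the first
    processor where it fits under a utilization bound of 1.0.
    Returns one task list per processor, or None if partitioning fails."""
    processors = [[] for _ in range(m)]
    load = [0.0] * m
    for u in sorted(utilizations, reverse=True):  # largest tasks first
        for p in range(m):
            if load[p] + u <= 1.0:
                processors[p].append(u)
                load[p] += u
                break
        else:
            return None  # task fits on no processor: partitioning fails
    return processors

tasks = [0.6, 0.5, 0.4, 0.3, 0.2]  # hypothetical task utilizations
print(partition_first_fit(tasks, m=2))
```

The failure case illustrates the lower utilization bound mentioned above: a task set that a migrating (global or semi-partitioned) scheme could accommodate may be rejected because no single processor has room for some task.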
Abstract:
Dissertation for obtaining the Master's Degree in Informatics Engineering
Abstract:
Project work presented as a partial requirement to obtain a Master's Degree in Information Management
Abstract:
The widespread use of mobile devices has made known to the general public new areas that were hitherto confined to specialized devices. In general, the smartphone gave all users the ability to execute multiple tasks and, among them, to take photographs using the integrated cameras. Although these devices continuously receive improved cameras, their manufacturers do not take advantage of their full potential, since the operating systems normally offer only simple APIs and applications for shooting. Therefore, taking advantage of this mobile-device environment, we find ourselves in the best scenario to develop applications that help the user obtain a good result when shooting. In an attempt to provide a set of techniques and tools better suited to the task, this dissertation presents, as a contribution, a set of tools for mobile devices that provides real-time information on the composition of the scene before capturing an image. Thus, the proposed solution supports a user while capturing a scene with a mobile device. The user receives multiple suggestions on the composition of the scene, based on rules of photography or other tools useful to photographers. The tools include horizon detection and graphical visualization of the color palette present in the scene being photographed. These tools were evaluated regarding their implementation on mobile devices and how users assess their usefulness.
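One way to read the color-palette tool mentioned above (purely illustrative; the abstract does not specify the algorithm used) is as a coarse quantization of the image's pixels followed by a frequency count of the quantized colors:

```python
from collections import Counter

def dominant_palette(pixels, n_colors=3, step=64):
    """Hypothetical palette sketch: snap each RGB pixel to a coarse
    grid (step per channel) and return the most frequent grid colors."""
    quantized = [tuple((c // step) * step for c in px) for px in pixels]
    return [color for color, _ in Counter(quantized).most_common(n_colors)]

# Tiny synthetic "image": mostly sky blue, some green foliage, a few highlights.
pixels = [(100, 150, 230)] * 70 + [(40, 140, 60)] * 25 + [(250, 250, 250)] * 5
print(dominant_palette(pixels))
```

A real-time on-device implementation would operate on the camera preview buffer and might subsample pixels, but the quantize-then-count structure keeps the per-frame cost linear in the number of pixels inspected.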
Abstract:
We report a severe case of diarrhea in a 62-year-old female HIV-negative patient from whom Giardia lamblia and Isospora belli were isolated. Because unusual and opportunistic infections should be considered as criteria for further analysis of immunological status, laboratory investigations led to a diagnosis of common variable immunodeficiency (CVID). This is the first reported case of isosporiasis in a patient with CVID and illustrates the importance of being aware of a possible link, particularly in relation to primary immunodeficiency.