38 results for Scheduled caste
at Instituto Politécnico do Porto, Portugal
Abstract:
The concept of demand response is of growing importance in the context of future power systems. Demand response can be seen as a resource, like distributed generation, storage, or electric vehicles. All these resources require an infrastructure that gives players the means to operate and use them efficiently. This infrastructure implements the smart grid concept in practice, and should accommodate a large number of diverse players in a competitive business environment. In this paper, demand response is optimally scheduled jointly with other resources, such as distributed generation units and the energy provided by the electricity market, minimizing the operation costs from the point of view of a virtual power player who manages these resources and supplies the aggregated consumers. The optimal schedule is obtained using two approaches based on particle swarm optimization (with and without mutation), which are compared with a deterministic approach used as a reference methodology. A case study with two scenarios, implemented in DemSi, a demand response simulator developed by the authors, demonstrates the advantages of the proposed particle swarm approaches.
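DemSi and the authors' exact cost model are not reproduced in the abstract; as an illustration of the particle swarm approach described above, the sketch below minimizes a hypothetical quadratic generation-cost function with a demand-balance penalty, and includes an optional mutation step. All function names, parameters, and the cost function are assumptions for illustration, not the paper's model.

```python
import random

def pso_schedule(cost, dim, bounds, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, mutation_rate=0.0):
    """Minimize `cost` over [lo, hi]^dim with particle swarm optimization.

    mutation_rate > 0 adds the mutation variant: a dimension is occasionally
    re-sampled uniformly, which helps escape local optima."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                if random.random() < mutation_rate:  # optional mutation step
                    pos[i][d] = random.uniform(lo, hi)
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Hypothetical operation cost: quadratic generation costs for three resources
# plus a heavy penalty for not matching the aggregated demand.
def op_cost(x, demand=10.0, prices=(2.0, 3.0, 5.0)):
    return sum(p * xi * xi for p, xi in zip(prices, x)) + 100.0 * abs(sum(x) - demand)
```

A run such as `pso_schedule(op_cost, dim=3, bounds=(0.0, 10.0), mutation_rate=0.05)` returns a dispatch whose components sum to roughly the demand, with cheaper resources loaded more heavily.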
Abstract:
Currently, power systems (PS) already accommodate a substantial penetration of distributed generation (DG) and operate in competitive environments. In the future, as a result of liberalisation and political regulations, PS will have to deal with large-scale integration of DG and other distributed energy resources (DER), such as storage, and provide market agents with the means to ensure flexible and secure operation. This cannot be done with the traditional PS operational tools used today, such as the rather restricted Supervisory Control and Data Acquisition (SCADA) information systems [1]. The trend towards using local generation in the active operation of the power system requires new solutions for data management. The relevant standards have been developed separately over the last few years, so they need to be unified in order to arrive at a common and interoperable solution. For distribution operation, the CIM models described in IEC 61968/70 are especially relevant. In Europe, dispersed and renewable energy resources (D&RER) are mostly operated without remote control mechanisms and feed the maximum amount of available power into the grid. To improve network operation performance, the idea of virtual power plants (VPP) will become a reality. In the future, the power generation of D&RER will be scheduled with high accuracy. To realise decentralised VPP energy management, communication facilities with standardised interfaces and protocols are needed. IEC 61850 is suitable to serve as a general standard for all communication tasks in power systems [2]. The paper deals with international activities and experiences in the implementation of a new data management and communication concept in the distribution system. The difficulties in coordinating the inconsistent communication and data management standards, developed in parallel, are addressed first.
The upcoming unification work, taking into account the growing role of D&RER in the PS, is then presented. The lag in current practical experience can be overcome with new tools for creating and maintaining CIM data and for simulating the IEC 61850 protocol, a prototype of which is presented in the paper. Since the origin and required accuracy of the data depend on its use (e.g. operation or planning), some remarks on the definition of the digital interface incorporated in the merging-unit concept, from the power utility's point of view, are also given. Finally, required future work is identified.
Abstract:
Master's dissertation presented to the Instituto de Contabilidade e Administração do Porto for the degree of Master in Auditing, supervised by Specialist Professor Carlos Quelhas Martins.
Abstract:
The work in this dissertation concerns the application of Lean methodologies to the maintenance function of a metalworking company producing moulds, Simoldes Aços. In the current environment, with national and international markets under fierce competition, companies are forced to study methods and techniques that eliminate waste and reduce costs and production times, while higher quality levels are demanded of the manufactured products in order to increase competitiveness. Since maintenance is a functional area with a high impact on production performance, its performance directly influences the behaviour of the production flow and its levels of effectiveness and efficiency. In the course of this master's dissertation, a comprehensive analysis of the current state of the maintenance activity at SIMOLDES SA was carried out, which made it possible to identify the areas and points requiring intervention and to design improvement solutions for the maintenance activity. In the final phase of the work, some of these improvement proposals were implemented, while others were scheduled for future implementation. The work was grounded in the Lean methodology, which plays a relevant role in implementing an integrated approach to the maintenance function in support of production objectives. The project based its implementation strategy on applying the 5S tool in parallel with TPM (Total Productive Maintenance). Both tools aim to reduce waste and increase process reliability, by increasing equipment availability, improving process performance and fully integrating all employees in the manufacturing process.
With the implementation of the proposed improvements, significant gains were observed in the flow of maintenance activities, as well as greater visibility of these activities throughout the production process.
Abstract:
This work analyses the effects, on the equity of Portuguese banks, of implementing the latest recommendations of the Basel Committee on Banking Supervision (BCBS), also known as Basel III (2010), which are to be phased in from 1 January 2013 to 1 January 2019. The analysis assumes that the 2012 risk-weighted assets remain constant and that capital must be increased year after year, following the recommendations, until the end of 2018. The aim is to assess the robustness of Portuguese banks' equity and whether they hold sufficient capital and reserves to satisfy the minimum capital recommendations suggested by the BCBS or whether, otherwise, they will need new capital injections or will have to reduce their economic activity. Basel III has not yet been implemented in Portugal, since the European Union is in the process of developing and implementing the Capital Requirements Directive IV (CRD IV), a recommendation that all central banks of the Euro-zone countries are to impose on their respective banks. This EU directive is entirely based on the Basel III recommendations and is expected to be implemented in 2014 or in the following years. Until now, Portuguese banks have followed a system based on Notice (Aviso) 6/2010 of the Banco de Portugal, which recommends calculating the core tier 1, tier 1 and tier 2 ratios using the internal ratings-based (IRB) method to assess the bank's exposure to credit, operational and other risks, and in which risk-weighted assets are calculated as 12.5 times the bank's total own-funds requirements. This method is based on the Basel II recommendations, which will be replaced by Basel III.
Given that one of the main causes of the economic and financial crisis that struck the world in 2007 was the build-up of excessive leverage and the gradual erosion of the quality of banks' equity base, it is important to analyse the position of Portuguese banks which, although not very large globally, control the country's economy. It is hoped that implementing the Basel III recommendations will prevent a future repetition of the systemic shocks of 2007. The results of this study, using the standardised method recommended by the BCBS, show that of the fourteen Portuguese banks included, only six (BES, Montepio, Finantia, BIG, Invest and BIC) can meet the Basel III minimum recommendations by 1 January 2019, and some others are marginally below the minimum ratios (CGD, Itaú and Crédito Agrícola).
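As an illustration of the ratio arithmetic described above (risk-weighted assets taken as 12.5 times total own-funds requirements, and capital held against a phased-in CET1 minimum), the sketch below uses the published Basel III phase-in percentages (CET1 minimum plus capital conservation buffer); the bank figures in the usage example are hypothetical, not taken from the study.

```python
# Published Basel III phase-in schedule: CET1 minimum and conservation buffer.
CET1_MIN = {2013: 0.035, 2014: 0.040, 2015: 0.045, 2016: 0.045,
            2017: 0.045, 2018: 0.045, 2019: 0.045}
BUFFER = {2013: 0.0, 2014: 0.0, 2015: 0.0, 2016: 0.00625,
          2017: 0.0125, 2018: 0.01875, 2019: 0.025}

def rwa_from_requirements(total_capital_requirements):
    """Risk-weighted assets as 12.5x total own-funds requirements (Aviso 6/2010 rule)."""
    return 12.5 * total_capital_requirements

def shortfall(cet1_capital, rwa, year):
    """Extra CET1 needed (if any) to meet the year's minimum, holding RWA constant."""
    required = (CET1_MIN[year] + BUFFER[year]) * rwa
    return max(0.0, required - cet1_capital)
```

For a hypothetical bank with total own-funds requirements of 400 (so RWA = 5000) and CET1 capital of 300, the 2019 minimum of 7% (4.5% plus the 2.5% buffer) requires 350, leaving a shortfall of 50, whereas the 2013 minimum of 3.5% (175) is comfortably met.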
Abstract:
A preliminary version of this paper appeared in Proceedings of the 31st IEEE Real-Time Systems Symposium, 2010, pp. 239–248.
Abstract:
The use of multicores is becoming widespread in the field of embedded systems, many of which have real-time requirements. Hence, ensuring that real-time applications meet their timing constraints is a prerequisite before deploying them on these systems. This necessitates considering the impact of contention for shared low-level hardware resources, such as the front-side bus (FSB), on the Worst-Case Execution Time (WCET) of the tasks. Towards this aim, this paper proposes a method to determine an upper bound on the number of bus requests that tasks executing on a core can generate in a given time interval. We show that our method yields tighter upper bounds than the state of the art. We then apply our method to compute the extra contention delay incurred by tasks when they are co-scheduled on different cores and access the shared main memory over a shared bus, access to which is granted by a round-robin (RR) arbitration protocol.
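The paper's actual bounds are not reproduced in the abstract; the sketch below only illustrates the general shape of such an analysis, under two assumptions labelled in the code: a core can issue at most one bus request per `min_gap` time units, and round-robin arbitration delays each request by at most one request from every other core.

```python
import math

def max_bus_requests(interval, total_misses, min_gap):
    """Hypothetical per-core bound: requests issued in `interval` are limited
    both by the task's total cache misses and by the fastest possible issue
    rate (assumed: one request per `min_gap` time units)."""
    return min(total_misses, math.ceil(interval / min_gap))

def rr_contention_delay(own_requests, n_cores, service_time):
    """Under round-robin arbitration, each of our requests waits for at most
    one request from every other core, each served in `service_time`."""
    return own_requests * (n_cores - 1) * service_time
```

For example, a task with 40 cache misses analysed over an interval of 100 time units with `min_gap = 3` is bounded at 34 requests, and on a 4-core platform with a bus service time of 2 it suffers at most 204 time units of extra contention delay.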
Abstract:
Consider the problem of assigning real-time tasks on a heterogeneous multiprocessor platform comprising two different types of processors; such a platform is referred to as a two-type platform. We present two linearithmic time-complexity algorithms, SA and SA-P, each providing the following guarantee. For a given two-type platform and a given task set, if there exists a feasible task-to-processor-type assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type, then (i) SA is guaranteed to find such a feasible task-to-processor-type assignment, with the same restriction on task migration, given a platform in which processors are 1+α/2 times faster, and (ii) SA-P succeeds in finding a feasible task-to-processor assignment in which tasks are not allowed to migrate between processors, given a platform in which processors are 1+α times faster, where 0 < α ≤ 1. The parameter α is a property of the task set; it is the maximum utilization of any task, which is less than or equal to 1.
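The stated guarantees can be read directly off the task set. A small sketch, assuming α is simply the maximum task utilization (stated above to be at most 1): SA needs processors 1+α/2 times faster and SA-P needs them 1+α times faster.

```python
def speedup_factors(utilizations):
    """Return the (SA, SA-P) processor-speed factors for a task set.

    alpha is taken to be the maximum task utilization, assumed to be
    at most 1 (a property of the task set, per the abstract)."""
    alpha = max(utilizations)
    assert 0.0 < alpha <= 1.0
    return 1.0 + alpha / 2.0, 1.0 + alpha
```

For a task set with utilizations {0.2, 0.5, 0.8}, α = 0.8, so SA's guarantee holds on processors 1.4 times faster and SA-P's on processors 1.8 times faster.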
Abstract:
Known algorithms capable of scheduling implicit-deadline sporadic tasks over identical processors at up to 100% utilisation invariably involve numerous preemptions and migrations. To the challenge of devising a scheduling scheme with as few preemptions and migrations as possible, for a given guaranteed utilisation bound, we respond with the algorithm NPS-F. It is configurable with a parameter that trades off guaranteed schedulable utilisation (up to 100%) against the number of preemptions. For any possible configuration, NPS-F introduces fewer preemptions than any other known algorithm matching its utilisation bound. A clustered variant of the algorithm, for systems made of multicore chips, eliminates costly off-chip task migrations by dividing processors into disjoint clusters formed by cores on the same chip (with the cluster size being a parameter). Clusters are scheduled independently, each using non-clustered NPS-F. The utilisation bound is only moderately affected. We also formulate an important extension (applicable to both clustered and non-clustered NPS-F) which optimises the supply of processing time to executing tasks and makes it more granular. This reduces the processing capacity required for schedulability without increasing preemptions.
Abstract:
The recent trend in chip architectures towards higher numbers of heterogeneous cores, non-uniform memory and non-coherent caches brings renewed attention to the use of Software Transactional Memory (STM) as a fundamental building block for developing parallel applications. Nevertheless, although STM promises to ease concurrent and parallel software development, it relies on the possibility of aborting conflicting transactions to maintain data consistency, which impacts the responsiveness and timing guarantees required by embedded real-time systems. In these systems, contention delays must be (efficiently) limited so that the response times of tasks executing transactions are upper-bounded and task sets can be feasibly scheduled. In this paper we assess the use of STM in the development of embedded real-time software, arguing that the amount of contention can be reduced if read-only transactions access recent consistent data snapshots, progressing in a wait-free manner. We show how the required number of versions of a shared object can be calculated for a set of tasks. We also outline an algorithm to manage conflicts between update transactions that prevents starvation.
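The paper's version-count formula is not given in the abstract; the sketch below is one plausible way such a bound could be composed, under a labelled assumption: while the longest read-only transaction is in progress, each updater task (with minimum inter-arrival time T) can commit at most ceil(L/T) new versions, and one extra slot holds the snapshot the reader is using.

```python
import math

def versions_needed(longest_read, update_periods):
    """Hypothetical bound on the versions of a shared object to keep so that
    read-only transactions can progress wait-free on a consistent snapshot.

    longest_read   -- worst-case duration L of the longest read-only transaction
    update_periods -- minimum inter-arrival times of the updater tasks
    """
    return 1 + sum(math.ceil(longest_read / T) for T in update_periods)
```

For a longest read of 10 time units and two updaters with periods 4 and 5, this sketch keeps 1 + 3 + 2 = 6 versions.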
Abstract:
The usage of COTS-based multicores is becoming widespread in the field of embedded systems. Providing real-time guarantees at design time is a prerequisite for deploying real-time systems on these multicores. This necessitates considering the impact of contention for shared low-level hardware resources on the Worst-Case Execution Time (WCET) of the tasks. As a step towards this aim, this paper first identifies the different factors that make WCET analysis a challenging problem on a typical COTS-based multicore system. Then, we propose, and prove mathematically correct, a method to determine tight upper bounds on the WCET of tasks when they are co-scheduled on different cores.
Abstract:
The current industry trend is towards using Commercially available Off-The-Shelf (COTS) multicores for developing real-time embedded systems, as opposed to custom-made hardware. In a typical implementation of such COTS-based multicores, multiple cores access the main memory via a shared bus. This often leads to contention on this shared channel, which increases the response times of the tasks. Analyzing this increased response time while considering the contention on the shared bus is challenging on COTS-based systems, mainly because bus arbitration protocols are often undocumented and the exact instants at which the shared bus is accessed by tasks are not explicitly controlled by the operating system scheduler; they are instead a result of cache misses. This paper makes three contributions towards analyzing tasks scheduled on COTS-based multicores. First, we describe a method to model the memory access patterns of a task. Second, we apply this model to analyze the worst-case response time of a set of tasks. Third, although the parameters required to obtain the request profile can be derived by static analysis, we provide an alternative method to obtain them experimentally using performance monitoring counters (PMCs). We also compare our work against an existing approach and show that ours outperforms it by providing a tighter upper bound on the number of bus requests generated by a task.
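A sketch of how such an analysis typically composes: a classic fixed-point response-time iteration extended with a bus-delay term driven by a request-bound function. The concrete model and parameters here are assumptions for illustration, not the paper's analysis.

```python
import math

def response_time(C, higher_prio, req_bound, delay_per_req, horizon=10**6):
    """Fixed-point response-time iteration extended with a bus-contention term.

    C             -- worst-case execution time of the task under analysis
    higher_prio   -- list of (C_j, T_j) pairs for higher-priority tasks
    req_bound     -- function mapping an interval length to the maximum number
                     of interfering bus requests in it (assumed model)
    delay_per_req -- worst-case bus delay added per interfering request
    """
    R = C
    while True:
        interference = sum(math.ceil(R / T_j) * C_j for C_j, T_j in higher_prio)
        bus_delay = req_bound(R) * delay_per_req
        R_next = C + interference + bus_delay
        if R_next == R:
            return R       # fixed point reached: worst-case response time
        if R_next > horizon:
            return None    # no convergence within the analysis horizon
        R = R_next
```

For a task with C = 2, one higher-priority task (C = 1, T = 5), at most one interfering request per 10 time units, and a bus delay of 1 per request, the iteration settles at a response time of 4.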
Abstract:
Consider the problem of scheduling a set of implicit-deadline sporadic tasks to meet all deadlines on a heterogeneous multiprocessor platform. We use a state-of-the-art algorithm proposed in [1] (we refer to it as LP-EE) for assigning tasks to a heterogeneous multiprocessor platform and (re-)prove its performance guarantee, but against a stronger adversary. We conjecture that if a task set can be scheduled to meet deadlines on a heterogeneous multiprocessor platform by an optimal task assignment scheme that allows task migrations, then LP-EE meets deadlines as well, with no migrations, if given processors twice as fast. We illustrate this with an example.
Abstract:
Consider the problem of scheduling a set of implicit-deadline sporadic tasks to meet all deadlines on a heterogeneous multiprocessor platform. We consider a restricted case where the maximum utilization of any task on any processor in the system is no greater than one. We use a state-of-the-art algorithm proposed in [1] (we refer to it as LP-EE) for assigning tasks to a heterogeneous multiprocessor platform and (re-)prove its performance guarantee for this restricted case, but against a stronger adversary. We show that if a task set can be scheduled to meet deadlines on a heterogeneous multiprocessor platform by an optimal task assignment scheme that allows task migrations, then LP-EE meets deadlines as well, with no migrations, if given processors twice as fast.
Abstract:
Consider the problem of scheduling a set of implicit-deadline sporadic tasks to meet all deadlines on a two-type heterogeneous multiprocessor platform. Each processor is either of type-1 or type-2, and each task has a different execution time on each processor type. Jobs can migrate between processors of the same type (referred to as intra-type migration) but cannot migrate between processors of different types. We present a new scheduling algorithm, LP-Relax(THR), which offers the following guarantee: if a task set can be scheduled to meet deadlines by an optimal task assignment scheme that allows intra-type migration, then LP-Relax(THR) meets deadlines as well, with intra-type migration, if given processors 1/THR times as fast (referred to as the speed competitive ratio), where THR <= 2/3.
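The stated guarantee can be read off directly: with THR at its maximum value of 2/3, LP-Relax(THR) needs processors 1/THR times as fast, i.e. a speed competitive ratio of 1.5. A trivial sketch of that arithmetic:

```python
def speed_competitive_ratio(thr):
    """LP-Relax(THR) needs processors 1/THR times as fast, for 0 < THR <= 2/3."""
    assert 0.0 < thr <= 2.0 / 3.0
    return 1.0 / thr
```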