17 results for Multi Domain Information Model
at Instituto Politécnico do Porto, Portugal
Abstract:
In this paper, we formulate the electricity retailers' short-term decision-making problem in a liberalized retail market as a multi-objective optimization model. Retailers with light physical assets, such as generation and storage units in the distribution network, are considered. Following advances in smart grid technologies, electricity retailers are becoming able to employ incentive-based demand response (DR) programs in addition to their physical assets to effectively manage the risks of market price and load variations. In this model, the DR scheduling is performed simultaneously with the dispatch of generation and storage units. The ultimate goal is to find the optimal values of the hourly financial incentives offered to the end-users. The proposed model considers the capacity obligations imposed on retailers by the grid operator. The profit-seeking retailer also has the objective of minimizing the peak demand, to avoid high capacity charges in the form of grid tariffs or penalties. The multi-objective problem is solved with the non-dominated sorting genetic algorithm II (NSGA-II), a fast and elitist multi-objective evolutionary algorithm. A case study is solved to illustrate the efficient performance of the proposed methodology. Simulation results show the effectiveness of the model for designing incentive-based DR programs and indicate the efficiency of NSGA-II in solving the retailers' multi-objective problem.
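As a concrete illustration of the sorting step that gives NSGA-II its name, the following minimal Python sketch performs non-dominated sorting on hypothetical two-objective points; the data, the (profit shortfall, peak demand) interpretation and the function names are illustrative assumptions, not taken from the paper.

```python
# Sketch of the non-dominated sorting step at the core of NSGA-II,
# applied to hypothetical (profit shortfall, peak demand) pairs.
# Both objectives are minimized; the data points are illustrative only.

def dominates(a, b):
    """True if a dominates b: no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Partition objective vectors into successive Pareto fronts."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        # a point belongs to the current front if nothing left dominates it
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# five candidate incentive schedules, scored on the two objectives
objs = [(3.0, 9.0), (1.0, 8.0), (2.0, 4.0), (4.0, 5.0), (5.0, 10.0)]
fronts = non_dominated_sort(objs)  # fronts[0] is the Pareto-optimal set
```

In a full NSGA-II these fronts drive elitist selection, with crowding distance breaking ties inside a front.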
Abstract:
Due to the vast amount of data available on the Internet, one of the greatest challenges in the virtual world is recommending information to its users. On the other hand, this large amount of data can be useful for improving recommendations if it is annotated and interlinked with provenance data. This work addresses the recommendation of (changes to) access permissions on resources to their owner, rather than the recommendation of the resource itself to a potential consumer/reader. To enable the recommendation of access to a given resource, regardless of the domain where it is hosted, it is essential to use distributed access control systems together with domain-independent resource tracking and recommendation mechanisms. The main goal of this thesis is therefore to use tracking information about actions performed on resources (i.e., information that relates resources and users across the Web regardless of the network domain) and to use it to enable the recommendation of access privileges to those resources for other users.
The development of this thesis yielded the following contributions: an analysis of the state of the art in recommendation and in recommender systems potentially applicable to privilege recommendation (section 2.3); an analysis of the state of the art in information tracking and provenance mechanisms (section 2.2); the proposal of a domain-independent access privilege recommender system and its integration into the previously proposed access control system (section 3.1); the survey, analysis and specification of access privilege information to be used in the recommender system (section 2.1); the specification of the information resulting from action tracking to be used in access privilege recommendation (section 4.1.1); the specification of the feedback information produced by the access recommender system and its reuse in the recommender system (section 4.1.3); the specification, implementation and integration of the access privilege recommender system into the existing platform (sections 4.2 and 4.3); and the evaluation experiments on the privilege recommender system, together with the analysis of the results obtained (section 5).
Abstract:
The need for better adaptation of networks to transported flows has led to research on new approaches such as content aware networks and network aware applications. In parallel, recent developments of multimedia and content oriented services and applications such as IPTV, video streaming, video on demand, and Internet TV reinforced interest in multicast technologies. IP multicast has not been widely deployed due to interdomain and QoS support problems; therefore, alternative solutions have been investigated. This article proposes a management driven hybrid multicast solution that is multi-domain and media oriented, and combines overlay multicast, IP multicast, and P2P. The architecture is developed in a content aware network and network aware application environment, based on light network virtualization. The multicast trees can be seen as parallel virtual content aware networks, spanning a single or multiple IP domains, customized to the type of content to be transported while fulfilling the quality of service requirements of the service provider.
Abstract:
13th IEEE/IFIP International Conference on Embedded and Ubiquitous Computing (EUC 2015), 21-23 October 2015, Session W1-A: Multiprocessing and Multicore Architectures. Porto, Portugal.
Abstract:
The scope of this work involves testing the BIM model of a building under construction by Mota-Engil – Engenharia, through the experimental extraction of preparation drawings supporting construction execution. Chapter 1 of this report defines the scope and objectives of the work, provides a historical background of the subject, and covers the concepts and activities of construction preparation in its traditional form. The state of the art of construction preparation and, more specifically, of BIM technology at national and international level is addressed in chapter 2, which seeks to define the main concepts underlying this new methodology by identifying and characterizing the technology involved and its level of development. Based on practical cases of traditional construction preparation, identified and developed in chapter 3, a standard set of supporting drawings, identified and characterized in chapter 4, was compiled, common to the execution of various types of building works. Building on this compilation and on the study of the execution design of the contract underlying this work, from which the BIM model was created, a set of 2D preparation and execution-support drawings to be extracted from the model was identified. Chapter 5 describes how the project design was studied, highlighting the most relevant factors and specifying the drawings to be extracted. Using the ArchiCAD modelling software, the extraction of the previously identified set of drawings was achieved with the functionalities available in the software, which allows the creation of 2D drawings that may or may not be automatically updated from the model. Any change introduced in the virtual model is automatically propagated to the two-dimensional drawings, if the user so wishes.
Throughout this work, the constraints inherent to the extraction process were detected and analysed, as described in chapter 6, in order to establish standard modelling rules to be adopted in future contracts, rules that can simplify obtaining the preparation drawings needed for their execution. Section 6.3 identifies improvements to be introduced in the model. In conclusion, chapter 7 discusses characteristics of the construction sector that increasingly support and demonstrate the need to use new technologies towards the adoption of standard practices and tools supporting construction execution. Since BIM technology cuts across the entire sector, its use with standard rules for model design and data extraction enhances the optimization of cost, time, resources and the final quality of a development throughout its whole life cycle, besides supporting decision-making with high reliability over that period. BIM technology makes it possible to preview the building to be constructed with a high level of detail, with all the advantages that this entails.
Abstract:
The integration of wind power in electricity generation brings new challenges to unit commitment due to the random nature of wind speed. For this particular optimisation problem, wind uncertainty has been handled in practice by means of conservative stochastic scenario-based optimisation models, or through additional operating reserve settings. However, generation companies may have different attitudes towards operating costs, load curtailment, or waste of wind energy when considering the risk caused by wind power variability. Therefore, alternative and possibly more adequate approaches should be explored. This work is divided into two main parts. First, we survey the main formulations presented in the literature for the integration of wind power in the unit commitment problem (UCP) and present an alternative model for wind-thermal unit commitment. We use utility theory concepts to develop a multi-criteria stochastic model. The objectives considered are the minimisation of costs, load curtailment and waste of wind energy. These are represented by individual utility functions and aggregated into a single additive utility function. This last function is adequately linearised, leading to a mixed-integer linear programming (MILP) model that can be tackled by general-purpose solvers in order to find the most preferred solution. In the second part, we discuss the integration of pumped-storage hydro (PSH) units in the UCP with large wind penetration. These units can provide extra flexibility by using wind energy to pump and store water in the form of potential energy, which can later be used for generation during peak-load periods. PSH units are added to the first model, yielding a MILP model with wind-hydro-thermal coordination. Results show that the proposed methodology is able to reflect the risk profiles of decision makers for both models. Including PSH units significantly improves the results.
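The aggregation and linearisation steps described above can be sketched as follows; the scaling constants \(k_i\), the auxiliary variables \(v_i\) and the segment coefficients \(a_{i,s}, b_{i,s}\) are assumed notation for illustration, not the paper's exact formulation.

```latex
% Illustrative sketch of the additive utility aggregation and its
% linearisation (assumed notation):
% f_1 = operating cost, f_2 = load curtailment, f_3 = wasted wind energy.
\max \quad U = \sum_{i=1}^{3} k_i \, v_i
\qquad \text{s.t.} \quad
v_i \le a_{i,s}\, f_i(x) + b_{i,s} \quad \forall i, s,
\qquad \sum_{i=1}^{3} k_i = 1, \; k_i \ge 0 .
```

If each individual utility \(u_i\) is concave and piecewise linear, bounding \(v_i\) by every segment and maximising forces \(v_i = \min_s (a_{i,s} f_i + b_{i,s}) = u_i(f_i)\) at the optimum, which is what makes the aggregated model a MILP solvable by general-purpose solvers.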
Abstract:
The 30th ACM/SIGAPP Symposium On Applied Computing (SAC 2015), 13-17 April 2015, Embedded Systems. Salamanca, Spain.
Abstract:
Many of the most common human capabilities, such as temporal and non-monotonic reasoning, have not yet been fully mapped into deployed systems, even though some theoretical breakthroughs have already been accomplished. This is mainly due to the inherent computational complexity of the theoretical approaches. In the particular area of fault diagnosis in power systems, however, some systems that tried to solve the problem have been deployed, using methodologies such as production-rule-based expert systems, neural networks, recognition of chronicles, fuzzy expert systems, etc. SPARSE (from the Portuguese acronym, meaning expert system for incident analysis and restoration support) was one of these systems and, in the course of its development, came the need to cope with incomplete and/or incorrect information, in addition to the traditional problems of power system fault diagnosis based on SCADA (supervisory control and data acquisition) information retrieval, namely real-time operation, huge amounts of information, etc. This paper presents an architecture for a decision support system that solves these problems using a symbiosis of the event calculus and default-reasoning rule-based system paradigms, ensuring soft real-time operation with the ability to handle incomplete, incorrect or domain-incoherent information. A prototype implementation of this system is already at work in the control centre of the Portuguese Transmission Network.
Abstract:
Knowledge is central to the modern economy and society. Indeed, the knowledge society has transformed the concept of knowledge and is increasingly aware of the need to overcome the lack of knowledge when it has to make choices or address its problems and dilemmas. One's knowledge is less based on exact facts and more on hypotheses, perceptions or indications. Even when we use new computational artefacts and novel methodologies for problem solving, such as Group Decision Support Systems (GDSSs), the question of incomplete information is in most situations marginalized. On the other hand, common sense tells us that when a decision is made it is impossible to have a perception of all the information involved and of its intrinsic quality. Therefore, something has to be done in terms of the information available and the process of its evaluation. It is within this framework that a Multi-valued Extended Logic Programming language will be used for knowledge representation and reasoning, leading to a model that embodies the Quality-of-Information (QoI) and its quantification along the several stages of the decision-making process. In this way, it is possible to provide a measure of the value of the QoI that supports the decision itself. This model is presented here in the context of a GDSS for VirtualECare, a system aimed at sustaining online healthcare services.
Abstract:
The stock market, globally, has recently proved to be one of the main drivers of the securities market. Its impact on the general public is enormous and its importance for companies is vital. It is therefore worth understanding how financial theory has approached the valuation of shares and the understanding of the price formation process. From the 1950s to the present day, it is worth seeing how different authors have treated this question and what has resulted from this confrontation. Of particular interest are Stephen Ross's approach and the arbitrage pricing theory. Following this approach, and with the emergence of the Multi Index Model, it became possible to estimate the evolution of the share price more accurately, insofar as it would depend on a broad set of variables covering a wide area of influence. Ross's contribution is therefore decisive. In the end, the aim is to retain the best technique and theory, the one that defends the investor's interests. Given this, it remains to determine the best statistical technique for conducting these empirical studies.
Abstract:
The process of resource system selection plays an important part in the integration of Distributed/Agile/Virtual Enterprises (D/A/V Es). However, resource system selection is still a difficult problem to solve in a D/A/VE, as this paper points out. Globally, the selection problem has been approached from different perspectives, giving rise to different kinds of models/algorithms to solve it. To assist the development of an intelligent and flexible web prototype tool (broker tool) that integrates all the selection model activities and tools, with the capacity to adapt to each D/A/VE project or instance (the major goal of our final project), this paper presents a formulation of one kind of resource selection problem and the limitations of the algorithms proposed to solve it. We formulate a particular case of the problem as an integer program, which is solved using the simplex and branch-and-bound algorithms, and identify their performance limitations (in terms of processing time) based on simulation results. These limitations depend on the number of processing tasks and on the number of pre-selected resources per processing task, defining the domain of applicability of the algorithms for the studied problem. The limitations detected create the need to apply other kinds of algorithms (approximate solution algorithms) outside the domain of applicability found for the simulated algorithms. For a broker tool, however, knowledge of the algorithms' limitations is very important in order to, based on the problem's features, develop and select the most suitable algorithm that guarantees good performance.
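To make the combinatorial growth behind these limitations concrete, here is a hedged Python sketch (not the paper's exact formulation): selecting one pre-selected resource per processing task to minimise total cost, by exhaustive enumeration. The cost matrix and function names are illustrative assumptions.

```python
# Illustrative sketch: choose one pre-selected resource per processing
# task so that total cost is minimal. With R candidate resources per
# task and T tasks there are R**T assignments, which is why exact
# methods stop scaling as T and R grow and approximate algorithms
# become necessary outside their domain of applicability.
import itertools

def best_assignment(costs):
    """costs[t][r] = cost of running task t on its r-th pre-selected resource."""
    best, best_cost = None, float("inf")
    # enumerate every combination of one resource index per task
    for choice in itertools.product(*[range(len(row)) for row in costs]):
        total = sum(costs[t][r] for t, r in enumerate(choice))
        if total < best_cost:
            best, best_cost = choice, total
    return best, best_cost

costs = [[4, 2, 7],   # task 0
         [3, 9, 1],   # task 1
         [5, 5, 2]]   # task 2
choice, total = best_assignment(costs)
```

In practice such instances would be fed to an integer-programming solver; the brute-force loop above only exposes the size of the search space the solver has to prune.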
Abstract:
We consider a Bertrand duopoly model with unknown costs. Each firm's aim is to choose the price of its product according to the well-known concept of Bayesian Nash equilibrium. The choices are made simultaneously by both firms. In this paper, we suppose that each firm has two different technologies and uses one of them according to a certain probability distribution. The use of one technology or the other affects the unit production cost. We show that this game has exactly one Bayesian Nash equilibrium. We analyse the advantages, for firms and for consumers, of using the technology with the highest production cost versus the one with the lowest production cost. We prove that the expected profit of each firm increases with the variance of its production costs. We also show that the expected price of each good increases with both expected production costs, the effect of the rival's expected production costs being dominated by the effect of the firm's own expected production costs.
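In symbols, the equilibrium concept used above can be sketched as follows; the demand function \(D_i\) and the notation are generic assumptions for illustration, not the paper's exact model.

```latex
% Bayesian Nash equilibrium of the Bertrand game with privately known
% costs: c_i is firm i's unit cost (one of two technology-dependent
% values drawn with known probabilities), and D_i is firm i's demand
% (generic notation, assumed here). Each firm's pricing strategy is a
% best response in expectation over the rival's cost realisation:
p_i^{*}(c_i) \in \arg\max_{p_i}\;
\mathbb{E}_{c_j}\!\left[ (p_i - c_i)\, D_i\!\big(p_i,\, p_j^{*}(c_j)\big) \right],
\qquad i \ne j, \; i, j \in \{1, 2\}.
```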
Abstract:
The latest medical diagnosis devices enable e-diagnosis, making access to these services easier, faster and available in remote areas. However, this imposes new communication and data interchange challenges. In this paper, a new XML-based format for storing cardiac signals and related information is presented. The proposed structure encompasses data acquisition devices, patient information, data description, pathological diagnosis and waveform annotation. Compared with formats with a similar purpose, several advantages arise: besides the fully integrated data model, it offers geographical references for e-diagnosis, multi-stream data description, the ability to handle several simultaneous devices, the possibility of independent waveform annotation and an HL7-compliant structure for common contents. These features provide enhanced integration with existing systems and improved flexibility for cardiac data representation.
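A minimal sketch of the kind of structure such a format might take, built with Python's standard ElementTree; every element and attribute name below is a hypothetical illustration, since the actual schema is defined in the paper.

```python
# Hypothetical sketch of an XML cardiac record of the kind the abstract
# describes: acquisition device, patient with a geographic reference for
# e-diagnosis, one signal stream, and an independent waveform annotation.
# All names and values here are illustrative assumptions.
import xml.etree.ElementTree as ET

record = ET.Element("cardiacRecord")
ET.SubElement(record, "acquisitionDevice",
              {"model": "ECG-12", "sampleRateHz": "500"})
patient = ET.SubElement(record, "patient", {"id": "P001"})
# geographic reference supporting remote e-diagnosis
ET.SubElement(patient, "location", {"lat": "41.15", "lon": "-8.61"})
stream = ET.SubElement(record, "stream", {"lead": "II"})
# annotation kept separate from the raw waveform samples
ET.SubElement(stream, "annotation", {"sample": "1024", "label": "PVC"})

xml_text = ET.tostring(record, encoding="unicode")
```

Multiple `stream` elements under one record would cover the multi-stream and multi-device cases the abstract mentions.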
Abstract:
Decentralised co-operative multi-agent systems are computational systems where conflicts are frequent due to the nature of the represented knowledge. Negotiation methodologies, in this case argumentation-based negotiation methodologies, were developed and applied to solve unforeseeable and, therefore, unavoidable conflicts. The supporting computational model is a distributed belief revision system in which argumentation plays the decisive role of revision. The distributed belief revision system detects, isolates and solves, whenever possible, the identified conflicts. The detection and isolation of conflicts is performed automatically by the distributed consistency mechanism, and the resolution of the conflict, or belief revision, is achieved via argumentation. We propose and describe two argumentation protocols intended to solve different types of identified information conflicts: context-dependent and context-independent conflicts. While the protocol for context-dependent conflicts generates new consensual alternatives, the protocol for context-independent conflicts adopts the soundest, strongest argument presented. The paper shows the suitability of using argumentation as a distributed, decentralised belief revision protocol to solve unavoidable conflicts.
Abstract:
Many-core platforms are an emerging technology in the real-time embedded domain. These devices offer various options for power savings and cost reductions, and contribute to overall system flexibility; however, issues such as unpredictability, scalability and analysis pessimism are serious challenges to their integration into this domain. The focus of this work is on many-core platforms using a limited migrative model (LMM). LMM is an approach based on the fundamental concepts of the multi-kernel paradigm, a promising step towards scalable and predictable many-cores. In this work, we formulate the problem of real-time application mapping on a many-core platform using LMM and propose a three-stage method to solve it. An extended version of the existing analysis is used to ensure that the derived mappings (i) guarantee the fulfilment of the timing constraints posed on the worst-case communication delays of individual applications, and (ii) provide an environment in which to perform load balancing for, e.g., energy/thermal management, fault tolerance and/or performance reasons.