17 results for Epstein and Zin's recursive utility function
at Instituto Politécnico do Porto, Portugal
Abstract:
The integration of wind power in electricity generation brings new challenges to unit commitment due to the random nature of wind speed. For this particular optimisation problem, wind uncertainty has been handled in practice by means of conservative stochastic scenario-based optimisation models, or through additional operating reserve settings. However, generation companies may have different attitudes towards operating costs, load curtailment, or waste of wind energy when considering the risk caused by wind power variability. Therefore, alternative and possibly more adequate approaches should be explored. This work is divided into two main parts. First, we survey the main formulations presented in the literature for the integration of wind power in the unit commitment problem (UCP) and present an alternative model for wind-thermal unit commitment. We use utility theory concepts to develop a multi-criteria stochastic model. The objectives considered are the minimisation of costs, load curtailment and waste of wind energy. These are represented by individual utility functions and aggregated into a single additive utility function. This last function is adequately linearised, leading to a mixed-integer linear programming (MILP) model that can be tackled by general-purpose solvers in order to find the most preferred solution. In the second part we discuss the integration of pumped-storage hydro (PSH) units in the UCP with large wind penetration. These units can provide extra flexibility by using wind energy to pump and store water in the form of potential energy that can later be used for generation during peak load periods. PSH units are added to the first model, yielding a MILP model with wind-hydro-thermal coordination. Results showed that the proposed methodology is able to reflect the risk profiles of decision makers for both models. By including PSH units, the results are significantly improved.
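A plausible form of the additive aggregation described above — illustrative only, since the abstract does not give the exact formulation, and the weights and criteria labels are our notation — is:

```latex
% Illustrative additive utility aggregation (notation assumed):
% f_1 = expected operating cost, f_2 = expected load curtailment,
% f_3 = expected wind energy waste, over decision vector x.
\[
  U(x) \;=\; \sum_{i=1}^{3} w_i \, u_i\!\left(f_i(x)\right),
  \qquad \sum_{i=1}^{3} w_i = 1,\quad w_i \ge 0,
\]
```

where each marginal utility \(u_i\) is a monotone function of its criterion; approximating each \(u_i\) by piecewise-linear segments is what keeps the maximisation of \(U\) a mixed-integer *linear* program.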
Abstract:
This paper proposes a particle swarm optimization (PSO) approach to support electricity producers in multiperiod optimal contract allocation. The producer's risk preference is stated by a utility function (U) expressing the trade-off between the expectation and the variance of the return. The variance estimate and the expected return are based on a forecasted scenario interval determined by a price range forecasting model developed by the authors. A certain confidence level is associated with each forecasted scenario interval. The proposed model makes use of contracts with physical (spot and forward) and financial (options) settlement. PSO performance was evaluated by comparing it with a genetic algorithm-based approach. This model can be used by producers in deregulated electricity markets, but can easily be adapted to load-serving entities and retailers, as well as to the use of other types of contracts.
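Although the abstract does not spell out the functional form, a common mean-variance shape for such a utility function (the risk-aversion parameter λ is our notation, not necessarily the authors') is:

```latex
\[
  U(R) \;=\; \mathbb{E}[R] \;-\; \lambda \,\mathrm{Var}[R], \qquad \lambda \ge 0,
\]
```

where \(R\) is the portfolio return over the contract horizon, \(\mathbb{E}[R]\) and \(\mathrm{Var}[R]\) are estimated from the forecasted price-scenario interval, and a larger \(\lambda\) expresses stronger risk aversion.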
Abstract:
This paper proposes a swarm intelligence long-term hedging tool to support electricity producers in competitive electricity markets. This tool investigates the long-term hedging opportunities available to electric power producers through the use of contracts with physical (spot and forward) and financial (options) settlement. To find the optimal portfolio, the producer's risk preference is stated by a utility function (U) expressing the trade-off between the expectation and the variance of the return. The variance estimate and the expected return are based on a forecasted scenario interval determined by a long-term price range forecast model, developed by the authors, whose explanation is outside the scope of this paper. The proposed tool makes use of Particle Swarm Optimization (PSO), and its performance has been evaluated by comparing it with a Genetic Algorithm (GA) based approach. To validate the risk management tool, a case study using real historical price data for the mainland Spanish market is presented to demonstrate the effectiveness of the proposed methodology.
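As a sketch of the kind of search such a tool performs — the actual objective, contracts and constraints are the authors'; everything below (names, bounds, forecast numbers) is hypothetical — a generic PSO over contract volumes maximising a mean-variance utility could look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def utility(x):
    """Hypothetical mean-variance utility of a contract portfolio.
    x[i] = energy volume allocated to contract i (MWh)."""
    expected_price = np.array([52.0, 55.0, 49.0])   # assumed forecast means
    price_variance = np.array([4.0, 9.0, 2.5])      # assumed forecast variances
    risk_aversion = 0.1
    exp_return = expected_price @ x
    var_return = price_variance @ (x ** 2)          # contracts assumed independent
    return exp_return - risk_aversion * var_return

n_particles, n_dims, n_iters = 30, 3, 200
lo, hi = 0.0, 100.0                                 # volume bounds per contract
pos = rng.uniform(lo, hi, (n_particles, n_dims))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([utility(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                           # standard PSO coefficients
for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, n_dims))
    # velocity update pulls each particle towards its own best and the swarm best
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([utility(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best allocation (MWh):", np.round(gbest, 1),
      "utility:", round(utility(gbest), 1))
```

A GA-based comparison, as in the paper, would replace the velocity/position update with selection, crossover and mutation over the same utility.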
Abstract:
Proteins are biochemical entities consisting of one or more blocks typically folded into a 3D pattern. Each block (a polypeptide) is a single linear sequence of amino acids that are biochemically bonded together. The amino acid sequence in a protein is defined by the sequence of a gene or several genes encoded in the DNA-based genetic code. This genetic code typically uses twenty amino acids, but in certain organisms it can also include two others. After the amino acids are linked during protein synthesis, each becomes a residue in the protein, which is then chemically modified, ultimately changing and defining the protein's function. In this study, the authors analyze the amino acid sequence using alignment-free methods, aiming to identify structural patterns in sets of proteins and in the proteome, without any other prior assumptions. The paper starts by analyzing amino acid sequence data by means of histograms of fixed-length amino acid words (tuples). After creating the initial relative frequency histograms, these are transformed and processed in order to generate quantitative results for information extraction and graphical visualization. Selected samples from two reference datasets are used, and the results reveal that the proposed method is able to generate relevant outputs in accordance with current scientific knowledge in domains like protein sequence/proteome analysis.
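A minimal sketch of the first step described above — relative-frequency histograms of fixed-length amino acid tuples — might look like this (the sequence and tuple length are illustrative; the authors' subsequent transformations and visualisation are not reproduced):

```python
from collections import Counter
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"          # the 20 standard residues

def tuple_histogram(seq: str, k: int = 2) -> dict[str, float]:
    """Relative frequencies of all overlapping length-k words in a protein sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    # include zero entries so histograms from different proteins are comparable
    return {"".join(w): counts.get("".join(w), 0) / total
            for w in product(AMINO_ACIDS, repeat=k)}

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"     # toy sequence, not from the paper
hist = tuple_histogram(seq, k=2)
top = sorted(hist.items(), key=lambda kv: kv[1], reverse=True)[:5]
print(top)                                    # most frequent 2-tuples
```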
Abstract:
This paper addresses the optimal involvement of a power producer in derivatives electricity markets to hedge against pool price volatility. To this end, a swarm intelligence meta-heuristic optimization technique for a long-term risk management tool is proposed. This tool investigates the long-term risk-hedging opportunities available to electric power producers through the use of contracts with physical (spot and forward contracts) and financial (options contracts) settlement. The producer's risk preference is formulated as a utility function (U) expressing the trade-off between the expectation and the variance of the return, both based on a forecasted scenario interval determined by a long-term price range forecasting model. This model also makes use of particle swarm optimization (PSO) to find the parameters that achieve the best forecasting results. Since price estimation depends on load forecasting, this work also presents a regressive long-term load forecast model that uses PSO to find its best parameters, as in the price estimation. The performance of the PSO technique has been evaluated by comparison with a Genetic Algorithm (GA) based approach. A case study is presented, and the results are discussed taking into account real historical price and load data from the mainland Spanish electricity market, demonstrating the effectiveness of the methodology in handling this type of problem. Finally, conclusions are duly drawn.
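The regressive load model itself is not specified in the abstract; as a hedged illustration of the general idea — a load regression on lagged values whose coefficients could equally be tuned by PSO rather than least squares — consider the following sketch (the synthetic data and lag structure are entirely assumed):

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic hourly load with a daily cycle (illustrative, not real Spanish data)
t = np.arange(24 * 60)
load = 20_000 + 4_000 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 300, t.size)

# regressive model: load[t] ~ a * load[t-1] + b * load[t-24] + c
X = np.column_stack([load[23:-1], load[:-24], np.ones(load.size - 24)])
y = load[24:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ coef
mape = np.mean(np.abs((y - pred) / y)) * 100
print("coefficients:", np.round(coef, 3), f" in-sample MAPE: {mape:.2f}%")
```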
Abstract:
In this paper, we analyze the performance limits of the slotted CSMA/CA mechanism of IEEE 802.15.4 in the beacon-enabled mode for broadcast transmissions in WSNs. The motivation for evaluating the beacon-enabled mode is its flexibility for WSN applications as compared to the non-beacon-enabled mode. Our analysis is based on an accurate simulation model of the slotted CSMA/CA mechanism on top of a realistic physical layer, with respect to the IEEE 802.15.4 standard specification. The performance of slotted CSMA/CA is evaluated and analyzed for different network settings to understand the impact of the protocol attributes (superframe order, beacon order and backoff exponent) on the network performance, namely in terms of throughput (S), average delay (D) and probability of success (Ps). We introduce the concept of utility (U), a combination of two or more metrics, to determine the best offered load range for optimal network behavior. We show that the optimal network performance using slotted CSMA/CA occurs for offered loads in the range of 35% to 60%, with respect to a utility function proportional to the network throughput (S) divided by the average delay (D).
Abstract:
IEEE 802.15.4 has been adopted as a communication protocol standard for Low-Rate Wireless Personal Area Networks (LR-WPANs). While it appears to be a promising candidate solution for Wireless Sensor Networks (WSNs), its adequacy must be carefully evaluated. In this paper, we analyze the performance limits of the slotted CSMA/CA medium access control (MAC) mechanism in the beacon-enabled mode for broadcast transmissions in WSNs. The motivation for evaluating the beacon-enabled mode is its flexibility and potential for WSN applications as compared to the non-beacon-enabled mode. Our analysis is based on an accurate simulation model of the slotted CSMA/CA mechanism on top of a realistic physical layer, with respect to the IEEE 802.15.4 standard specification. The performance of slotted CSMA/CA is evaluated and analyzed for different network settings to understand the impact of the protocol attributes (superframe order, beacon order and backoff exponent), the number of nodes and the data frame size on the network performance, namely in terms of throughput (S), average delay (D) and probability of success (Ps). We also analytically evaluate the impact of the slotted CSMA/CA overheads on the saturation throughput. We introduce the concept of utility (U), a combination of two or more metrics, to determine the best offered load range for optimal network behavior. We show that the optimal network performance using slotted CSMA/CA occurs for offered loads in the range of 35% to 60%, with respect to a utility function proportional to the network throughput (S) divided by the average delay (D).
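As a toy illustration of the utility metric used in the two studies above — the (G, S, D) values below are invented placeholders, not the papers' simulation results:

```python
# Utility U = S / D over offered load G, as defined in the abstracts.
offered_load = [0.1, 0.2, 0.35, 0.5, 0.6, 0.8, 1.0]        # fraction of capacity
throughput   = [0.09, 0.18, 0.30, 0.38, 0.40, 0.38, 0.33]  # S (made up)
avg_delay    = [1.0, 1.1, 1.3, 1.7, 2.2, 3.6, 6.0]         # D, normalised (made up)

utility = [s / d for s, d in zip(throughput, avg_delay)]
best = max(range(len(utility)), key=utility.__getitem__)
print(f"U peaks at G = {offered_load[best]:.2f} (U = {utility[best]:.3f})")
```

With real simulation data, scanning U over G in this way is what identifies the optimal offered load range reported in the papers.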
Abstract:
Introduction: this study evaluated a case of low back pain with sciatica and lumbar disc herniation, through clinical examination confirmed by magnetic resonance imaging. Manual therapy treatment was carried out over 4 weeks, until the symptoms were abolished and function fully recovered. Objectives: to carry out a manual therapy treatment in a patient with low back pain with sciatica and lumbar disc herniation over a period of 1 month. Methods: a study was conducted on a 40-year-old male patient presenting with low back pain with sciatica and lumbar disc herniation. The assessment used palpatory movement tests (lumbar Mitchell test, sacroiliac Gillet test); observation of postural asymmetries; standing active mobility testing; a visual analogue scale (VAS) for pain assessment; seated lumbar compression tests; neurological conduction tests; and neurodynamic tests. The tests were applied at the beginning and end of the first 4 manual therapy sessions, one week apart, with a follow-up one week after the 4th session. Results: after the 4 treatment sessions, the antalgic posture was absent. Pain decreased from 5/10 to 0/10 (VAS) in standing flexion, extension, left rotation and left side-bending of the lumbar segment, with normal ranges of mobility. Pain decreased from 3/10 to 0/10 (VAS) in the seated lumbar compression test, and from 6/10 to 0/10 (VAS) in the seated lumbar compression test with left side-bending. The initially positive lumbar Mitchell test (FRS L5; lumbar NSR) became negative, as did the sacroiliac Gillet test, the SLR test, and SLR + neck flexion; the L5 conduction tests, which had been positive with hypoesthesia in the L5 dermatome and grade 3+ dorsiflexor strength on the left, became negative for hypoesthesia, with grade 5 dorsiflexor strength. Conclusion: in this case, the set of manual therapy techniques applied, together with therapeutic exercises and postural and lifestyle advice, helped the patient fully recover function and abolish the pain.
Abstract:
Introduction: Myocardial Perfusion Imaging (MPI) is a very important tool in the assessment of Coronary Artery Disease (CAD) patients, and worldwide data demonstrate its increasingly wide use and clinical acceptance. Nevertheless, it is a complex process and quite vulnerable to a number of possible artefacts, some of which seriously affect the overall quality and clinical utility of the data obtained. One of the most inconvenient, yet relatively frequent (20% of cases), artefacts is related to patient motion during image acquisition. In most such situations, the specific data are evaluated and a decision is made to either (A) accept the results as they are, considering that the "noise" so introduced does not affect the final clinical information too seriously, or (B) repeat the acquisition process. Another possibility is to use the motion correction software provided within the software package of any current gamma camera. The aim of this study is to compare the quality of the final images obtained after application of motion correction software with that obtained after repetition of the image acquisition. Material and Methods: Thirty cases of MPI affected by motion artefacts, and subsequently repeated, were used. A group of three independent expert Nuclear Medicine clinicians, blinded to the origin of the images, was invited to evaluate the 30 sets of three images, one set per patient: (A) the original image, motion uncorrected; (B) the original image, motion corrected; and (C) the second acquisition image, without motion. The results obtained were statistically analysed. Results and Conclusion: The results demonstrate that motion correction software is useful essentially when the amplitude of the movement is not too large (a precise quantification proved hard to define, owing to discrepancies between clinicians and other factors, namely differences from one camera brand to another); when the amplitude of the movement is too large, the percentage of agreement between clinicians is much higher, and repetition of the examination is unanimously considered indispensable.
Abstract:
The development of new products or processes involves the creation, re-creation and integration of conceptual models from the related scientific and technical domains. In particular, in the context of collaborative networks of organisations (CNO) (e.g. a multi-partner, international project), such developments can be seriously hindered by conceptual misunderstandings and misalignments, resulting, for example, from participants with different backgrounds or organisational cultures. The research described in this article addresses this problem by proposing a method and tools to support the collaborative development of shared conceptualisations in the context of a collaborative network of organisations. The theoretical model is based on a socio-semantic perspective, while the method is inspired by conceptual integration theory from the field of cognitive semantics. The modelling environment is built upon a semantic wiki platform. The majority of the article is devoted to developing an informal ontology in the context of a European R&D project, studied using action research. The case study results validated the logical structure of the method and demonstrated its utility.
Abstract:
The influence of uncertainties in input parameters on the output response of composite structures is investigated in this paper. In particular, the effects of deviations in mechanical properties, ply angles, ply thickness and applied loads are studied. The uncertainty propagation and the importance measures of the input parameters are analysed using three different approaches: a first-order local method, a Global Sensitivity Analysis (GSA) supported by a variance-based method, and an extension of local variance to estimate the global variance over the domain of inputs. Sample results are shown for a shell composite laminated structure built with different composite systems, including multi-materials. The importance measures of the input parameters on the structural response, based on the numerical results, are established and discussed as a function of the anisotropy of the composite materials. The need for global variance methods is discussed by comparing the results obtained from the different proposed methodologies. The objective of this paper is to contribute to the use of GSA techniques together with inexpensive local importance measures.
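For reference, the variance-based importance measure underlying the GSA approach mentioned above is typically the first-order Sobol index (this is the standard definition; the paper's exact estimator is not given in the abstract):

```latex
\[
  S_i \;=\; \frac{\operatorname{Var}_{X_i}\!\big(\mathbb{E}[\,Y \mid X_i\,]\big)}
                 {\operatorname{Var}(Y)},
\]
```

where \(Y\) is the structural response and \(X_i\) an input parameter (a ply angle, thickness, material property or load). \(S_i\) measures the share of the output variance explained by \(X_i\) alone over the whole input domain, in contrast with a first-order local measure such as \(\left(\partial Y/\partial X_i\right)^2 \operatorname{Var}(X_i)\) evaluated at a nominal point.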
Abstract:
In practice, robotic manipulators present some degree of unwanted vibration. The advent of lightweight arm manipulators, mainly in the aerospace industry where weight is an important issue, leads to the problem of intense vibrations. On the other hand, robots interacting with the environment often generate impacts that propagate through the mechanical structure and also produce vibrations. In order to analyze these phenomena, a robot signal acquisition system was developed. The manipulator motion produces vibrations, either from the structural modes or from end-effector impacts. The instrumentation system acquires signals from several sensors that capture the joint positions, mass accelerations, forces and moments, and electrical currents in the motors. Afterwards, an analysis package, running off-line, reads the data recorded by the acquisition system and extracts the signal characteristics. Due to the multiplicity of sensors, the data obtained can be redundant, because the same type of information may be seen by two or more sensors. Given the price of the sensors, this aspect can be exploited to reduce the cost of the system. On the other hand, the placement of the sensors is an important issue in order to obtain suitable signals of the vibration phenomenon. Moreover, the study of these issues can help in the design optimization of the acquisition system. In this line of thought, a sensor classification scheme is presented.

Several authors have addressed the subject of sensor classification schemes. White (White, 1987) presents a flexible and comprehensive categorizing scheme that is useful for describing and comparing sensors. The author organizes the sensors according to several aspects: measurands, technological aspects, detection means, conversion phenomena, sensor materials and fields of application. Michahelles and Schiele (Michahelles & Schiele, 2003) systematize the use of sensor technology. They identified several dimensions of sensing that represent the sensing goals for physical interaction. A conceptual framework is introduced that allows categorizing existing sensors and evaluating their utility in various applications. This framework not only guides application designers in choosing meaningful sensor subsets, but can also inspire new systems and lead to the evaluation of existing applications.

Today's technology offers a wide variety of sensors. In order to use all the data from this diversity of sensors, a framework for integration is needed. Sensor fusion, fuzzy logic and neural networks are often mentioned when dealing with the problem of combining information from several sensors to get a more general picture of a given situation. The study of data fusion has been receiving considerable attention (Esteban et al., 2005; Luo & Kay, 1990). A survey of the state of the art in sensor fusion for robotics can be found in (Hackett & Shah, 1990). Henderson and Shilcrat (Henderson & Shilcrat, 1984) introduced the concept of the logic sensor, which defines an abstract specification of the sensors to integrate in a multisensor system. The recent development of micro-electro-mechanical sensors (MEMS) with wireless communication capabilities enables sensor networks with interesting capabilities. This technology has been applied in several areas (Arampatzis & Manesis, 2005), including robotics. Cheekiralla and Engels (Cheekiralla & Engels, 2005) propose a classification of wireless sensor networks according to their functionalities and properties.

This paper presents the development of a sensor classification scheme based on the frequency spectrum of the signals and on statistical metrics. Bearing these ideas in mind, this paper is organized as follows. Section 2 briefly describes the robotic system enhanced with the instrumentation setup. Section 3 presents the experimental results. Finally, section 4 draws the main conclusions and points out future work.
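A minimal sketch of the kind of features such a classification scheme could build on — frequency-spectrum content plus simple statistical metrics per sensor channel (the signal, sampling rate and feature set below are illustrative, not the paper's):

```python
import numpy as np

def signal_features(x: np.ndarray, fs: float) -> dict:
    """Frequency-spectrum and statistical features of one sensor channel."""
    spectrum = np.abs(np.fft.rfft(x - x.mean()))       # magnitude spectrum, DC removed
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return {
        "dominant_freq_hz": float(freqs[spectrum.argmax()]),
        "spectral_energy": float((spectrum ** 2).sum()),
        "mean": float(x.mean()),
        "std": float(x.std()),
        "kurtosis": float(((x - x.mean()) ** 4).mean() / x.var() ** 2),
    }

# toy accelerometer trace: a 50 Hz structural mode plus noise (illustrative)
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
accel = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.default_rng(2).normal(size=t.size)
print(signal_features(accel, fs))
```

Comparing such feature vectors across channels is one way to detect the redundancy between sensors discussed above.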
Abstract:
Objectives: to examine the behaviour of anticipatory postural adjustments during reaching in three children with spastic hemiparesis, following a physiotherapy intervention based on the Bobath concept. The study also sought to assess changes at the level of activities and participation, as well as in the capacity to modify neuromotor components. Methods: assessment was carried out before and three months after the intervention using surface electromyography, a Canon EOS camera, the International Classification of Functioning for Children and Youth (ICF-CY) and the Gross Motor Function Classification System (GMFCS). Results: improvements were observed in the anticipatory postural adjustments during reaching with both upper limbs, namely in muscle activation timing and in the duration of muscle activity, as well as positive changes in neuromotor components, such as in the sit-to-stand and stand-to-sit movement sequences, and in gait, more specifically in the swing phase of the predominantly affected lower limb. After the intervention, greater functionality was also observed in all three children, with positive changes in the activities and participation qualifiers. Conclusion: the intervention based on the Bobath concept induced positive changes in the children's functionality, reflected in the anticipatory postural adjustments and in the reorganisation of movement components.
Abstract:
Reconfigurable computing has experienced considerable expansion in the last few years, due in part to the fast run-time partial reconfiguration features offered by recent SRAM-based Field Programmable Gate Arrays (FPGAs), which allow the real-time implementation of dynamic resource allocation strategies, with multiple independent functions from different applications sharing the same logic resources in the space and time domains. However, when the sequence of reconfigurations to be performed is not predictable, the efficient management of the available logic space becomes the greatest challenge posed to these systems. Resource allocation decisions have to be made concurrently with system operation, taking into account function priorities and optimizing the space currently available. As a consequence of the unpredictability of this allocation procedure, the logic space becomes fragmented, with many small areas of free resources failing to satisfy most requests and so remaining unused. A rearrangement of the currently running functions is therefore necessary to obtain enough contiguous space to implement incoming functions, avoiding the spreading of their components and the resulting degradation of system performance. A novel active relocation procedure for Configurable Logic Blocks (CLBs) is presented here, able to carry out online rearrangements, defragmenting the available FPGA resources without disturbing the functions currently running.
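The abstract does not detail the relocation algorithm, but the allocation problem it targets can be sketched in one dimension as follows — a deliberately simplified first-fit allocator with compaction-style rearrangement; real CLB relocation is two-dimensional and must preserve the running state of relocated functions:

```python
# Simplified 1-D model of CLB column allocation on a partially
# reconfigurable FPGA. Functions occupy contiguous column ranges; when
# no single free gap is large enough, running functions are relocated
# ("compacted") to coalesce the free space. Entirely illustrative.
FPGA_COLUMNS = 32

def free_gaps(allocs):
    """Yield (start, length) of free gaps given (start, size, name) allocations."""
    cursor = 0
    for start, size, _ in sorted(allocs):
        if start > cursor:
            yield cursor, start - cursor
        cursor = start + size
    if cursor < FPGA_COLUMNS:
        yield cursor, FPGA_COLUMNS - cursor

def place(allocs, name, size):
    """First-fit placement; relocate running functions if too fragmented."""
    for start, length in free_gaps(allocs):
        if length >= size:
            allocs.append((start, size, name))
            return True
    if sum(length for _, length in free_gaps(allocs)) >= size:
        cursor = 0                      # relocate everything leftwards
        compacted = []
        for start, sz, nm in sorted(allocs):
            compacted.append((cursor, sz, nm))
            cursor += sz
        allocs[:] = compacted + [(cursor, size, name)]
        return True
    return False                        # genuinely out of resources

allocs = [(0, 6, "f1"), (10, 8, "f2"), (24, 4, "f3")]
print(place(allocs, "f4", 12), allocs)  # gaps are 4, 6 and 4: compaction needed
```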
Abstract:
Master's dissertation in Solicitadoria (legal practice).