979 results for program code generation
Abstract:
Sustainable development concerns have led renewable energy sources to be increasingly used in distributed electricity generation. However, this is mainly due to incentives or mandatory targets set by energy policies, as in the European Union. Assuring a sustainable future requires distributed generation to be able to participate in competitive electricity markets. To gain more negotiating power in the market and to take advantage of economies of scale, distributed generators can be aggregated, giving rise to a new concept: the Virtual Power Producer (VPP). VPPs are multi-technology and multi-site heterogeneous entities that should adopt organization and management methodologies so that they can make distributed generation a truly profitable activity, able to participate in the market. This paper presents ViProd, a simulation tool for simulating VPP operation in the context of MASCEM, a multi-agent based electricity market simulator.
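The aggregation idea behind a VPP can be illustrated with a minimal sketch: several distributed generators pool their hourly forecasts into a single market offer, withholding a margin against individual forecast errors. All names and figures below are invented for illustration; this is not ViProd's actual model.

    # Minimal VPP aggregation sketch; producers and figures are invented.
    producers = {
        "wind_farm_A":   [1.2, 0.8, 0.5],   # hypothetical MW forecast per hour
        "solar_park_B":  [0.0, 0.9, 1.4],
        "small_hydro_C": [0.6, 0.6, 0.6],
    }

    def vpp_offer(forecasts, reserve_margin=0.1):
        """Aggregate member forecasts and withhold a reserve margin so the
        coalition can honour its bid despite individual forecast errors."""
        hourly_total = [sum(h) for h in zip(*forecasts.values())]
        return [round(t * (1.0 - reserve_margin), 2) for t in hourly_total]

    print(vpp_offer(producers))   # [1.62, 2.07, 2.25]

Aggregation is what buys the coalition negotiating power: the pooled offer is both larger and, because forecast errors partly cancel, more dependable than any member's individual offer.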
Abstract:
This paper presents a new architecture for MASCEM, a multi-agent electricity market simulator. It is implemented in Prolog and integrated with a Java program through the LPA Win-Prolog Intelligence Server (IS), which provides a DLL interface between Win-Prolog and other applications. The paper mainly focuses on MASCEM's ability to provide the means to model and simulate Virtual Power Producers (VPP). VPPs are represented as coalitions of agents with specific characteristics and goals. VPPs can reinforce the importance of these generation technologies, making them valuable in electricity markets.
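The coupling described above, a declarative engine exposed to a host program through a DLL, can be sketched generically. In the fragment below the library name and the exported functions are hypothetical stand-ins, not the actual LPA Intelligence Server API; Python's ctypes plays the role of the Java-side binding.

    # Purely illustrative: a host program querying a Prolog engine through
    # a DLL interface. "market_rules.dll" and the exports is_init/is_query
    # are hypothetical, NOT the real LPA Intelligence Server API.
    import ctypes

    prolog = ctypes.CDLL("market_rules.dll")      # hypothetical engine DLL
    prolog.is_init.restype = ctypes.c_int
    prolog.is_query.argtypes = [ctypes.c_char_p]
    prolog.is_query.restype = ctypes.c_char_p

    if prolog.is_init() == 0:
        raise RuntimeError("Prolog engine failed to start")

    # Ask the (hypothetical) agent knowledge base for a VPP member's bid.
    answer = prolog.is_query(b"vpp_bid(wind_farm_A, Hour, Price).")
    print(answer.decode())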
Abstract:
Demand response can play a very relevant role in future power systems, in which distributed generation can help to assure service continuity in some fault situations. This paper deals with the demand response concept and discusses its use in the context of competitive electricity markets and intensive use of distributed generation. The paper presents DemSi, a demand response simulator that allows studying demand response actions and schemes using a realistic network simulation based on PSCAD. Demand response opportunities are used in an optimized way, considering flexible contracts between consumers and suppliers. A case study demonstrates the advantages of using flexible contracts and of optimizing the available generation when there is a lack of supply.
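The optimization hinted at above, choosing which flexible contracts to invoke when supply falls short, can be posed as a small linear program. The sketch below is a toy with invented consumers, limits, and prices, not DemSi's actual formulation:

    # Toy demand response dispatch: meet a supply deficit at minimum
    # contracted cost. Consumers, limits and prices are invented.
    from scipy.optimize import linprog

    deficit_mw = 5.0                    # load that must be curtailed
    price = [30.0, 45.0, 60.0]          # contract price per curtailed MW
    max_cut = [2.0, 3.0, 4.0]           # curtailable MW per consumer

    # minimize price.x  subject to  sum(x) >= deficit, 0 <= x_i <= max_cut_i
    res = linprog(c=price,
                  A_ub=[[-1.0, -1.0, -1.0]], b_ub=[-deficit_mw],
                  bounds=list(zip([0.0] * 3, max_cut)))
    print(res.x)   # [2. 3. 0.] : the cheapest contracts are invoked first

The inequality is written as -sum(x) <= -deficit because linprog only accepts upper-bound constraints.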
Abstract:
Nowadays, there is a growing environmental concern about where the energy we use comes from, bringing attention to renewable energies. However, the use and trading of renewable energies in the market can be complicated by the lack of generation guarantees, mainly in wind farms. The lack of guarantees is usually addressed by using reserve generation. The aggregation of DG plants gives rise to a new concept: the Virtual Power Producer (VPP). VPPs can reinforce the importance of wind generation technologies, making them valuable in electricity markets. This paper presents some results obtained with a simulation tool (ViProd) developed to support VPPs in the analysis of their operation and management methods and of the effects of their strategies.
Abstract:
Distributed generation, unlike centralized electrical generation, aims to generate electrical energy on a small scale as near as possible to load centers, interchanging electric power with the network. This work presents a probabilistic methodology conceived to assist electric system planning engineers in the selection of the distributed generation location, taking into account the hourly load changes or the daily load cycle. The hourly load centers, for each of the different hourly load scenarios, are calculated deterministically. These location points, properly weighted according to their load magnitude, are used to calculate the best-fit probability distribution. This distribution is used to determine the maximum likelihood perimeter of the area where each distributed generation source should preferably be located by the planning engineers. This takes into account, for example, the availability and the cost of the land lots, which are factors of special relevance in urban areas, as well as several obstacles important for the final selection of the candidate distributed generation points. The proposed methodology has been applied to a real case, assuming three different bivariate probability distributions: the Gaussian distribution, a bivariate version of Freund's exponential distribution, and the Weibull probability distribution. The methodology algorithm has been programmed in MATLAB. Results are presented and discussed for the application of the methodology to a realistic case and demonstrate the ability of the proposed methodology to efficiently handle the determination of the best location of the distributed generation and its corresponding distribution networks.
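For the Gaussian case, the core of the procedure can be sketched in a few lines: fit a weighted bivariate normal to the hourly load-center points and take the maximum-likelihood contour enclosing a chosen probability mass. The sketch below uses Python in place of the paper's MATLAB, and the coordinates and weights are invented:

    # Weighted bivariate-normal fit to hourly load centers, then the
    # ellipse enclosing a chosen probability mass. Data are invented.
    import numpy as np
    from scipy.stats import chi2

    centers = np.array([[2.1, 3.0], [2.4, 3.2], [1.9, 2.7], [2.6, 3.5]])  # km
    weights = np.array([80.0, 120.0, 60.0, 140.0])     # hourly load (MW)
    w = weights / weights.sum()

    mean = w @ centers                                 # weighted mean
    d = centers - mean
    cov = (w[:, None] * d).T @ d / (1.0 - (w ** 2).sum())  # weighted covariance

    # Semi-axes of the ellipse containing 90% of the fitted probability mass
    # follow from the eigendecomposition of the covariance matrix.
    evals, evecs = np.linalg.eigh(cov)
    semi_axes = np.sqrt(chi2.ppf(0.90, df=2) * evals)
    print("center:", mean, "semi-axes (km):", semi_axes)

The same recipe applies to the other two candidate distributions, with the likelihood contour computed from their fitted parameters instead of the chi-squared quantile.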
Abstract:
In recent years there has been a considerable increase in the number of people in need of intensive care, especially among the elderly, a phenomenon related to population ageing (Brown 2003). However, this is not exclusive to the elderly, as diseases such as obesity, diabetes, and high blood pressure have been increasing among young adults (Ford and Capewell 2007). This is a new reality that the healthcare sector, and particularly the public one, has to deal with. Thus, finding new and cost-effective ways of delivering healthcare is of particular importance, especially when patients are not to be detached from their environments (WHO 2004). Following this line of thinking, a VirtualECare Multiagent System is presented in section 2, with our efforts centered on its Group Decision modules (Costa, Neves et al. 2007) (Camarinha-Matos and Afsarmanesh 2001). On the other hand, there has been a growing interest in combining the technological advances in the information society - computing, telecommunications and knowledge - in order to create new methodologies for problem solving, namely those that rely on Group Decision Support Systems (GDSS) based on agent perception. Indeed, the new economy, along with increased competition in today's complex business environments, leads companies to seek complementarities in order to increase competitiveness and reduce risks. Under these scenarios, planning takes a major role in a company's life cycle. However, effective planning depends on the generation and analysis of ideas (innovative or not) and, as a result, the idea generation and management processes are crucial. Our objective is to apply the GDSS referred to above to a new area. We believe that the use of GDSS in the healthcare arena will allow professionals to achieve better results in the analysis of one's Electronic Clinical Profile (ECP). This attainment is vital, given the arrival on the market of new drugs and medical practices that compete for the use of limited resources.
Abstract:
Today, business group decision making is an extremely important activity. A considerable number of applications and research efforts have been produced in recent years to increase the effectiveness of the decision-making process. To support the idea generation process, the IGTAI (Idea Generation Tool for Ambient Intelligence) prototype was created. IGTAI is a Group Decision Support System designed to support any kind of meeting, namely distributed, asynchronous, or face-to-face. It aims at helping geographically distributed (or not) people and organizations in the idea generation task by making use of pervasive hardware in a meeting room, expanding the meeting beyond the room walls by allowing ubiquitous access through different kinds of equipment. This paper focuses on the research made to build the IGTAI prototype, its architecture, and its main functionalities, namely the support given in the different phases of an idea generation meeting.
Abstract:
Mathematical Programs with Complementarity Constraints (MPCC) find application in many fields. As the complementarity constraints fail the standard Linear Independence Constraint Qualification (LICQ) or the Mangasarian-Fromovitz Constraint Qualification (MFCQ) at any feasible point, nonlinear programming theory may not be directly applied to MPCC. However, an MPCC can be reformulated as an NLP problem and solved by nonlinear programming techniques. One of these, the Inexact Restoration (IR) approach, performs two independent phases in each iteration: the feasibility phase and the optimality phase. This work presents two versions of an IR algorithm to solve MPCC. In the feasibility phase, two strategies were implemented, depending on the features of the constraints. One gives more importance to the complementarity constraints, while the other considers the priority of the equality and inequality constraints, neglecting the complementarity ones. The optimality phase uses the same approach for both algorithm versions. The algorithms were implemented in MATLAB and the test problems are from the MACMPEC collection.
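For reference, the generic MPCC form alluded to above can be written as follows (a standard textbook formulation, not copied from the paper):

    \[
    \begin{aligned}
    \min_{x}\ & f(x) \\
    \text{s.t.}\ & g(x) \le 0, \quad h(x) = 0, \\
    & 0 \le G(x) \ \perp\ H(x) \ge 0,
    \end{aligned}
    \]

where $\perp$ abbreviates the complementarity condition $G_i(x)\,H_i(x) = 0$ for every $i$. It is precisely this condition that causes LICQ and MFCQ to fail at every feasible point, which is why the problem is reformulated as an NLP before standard techniques such as Inexact Restoration are applied.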
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit the data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits well applications and services such as digital television and video storage, where the decoder complexity is critical, but does not match well the requirements of emerging applications such as visual sensor networks, where the encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop the so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and not anymore of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty with regard to the more traditional predictive coding paradigm (at least under certain conditions). In the context of Wyner-Ziv video codecs, the so-called side information, which is a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly more efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide this review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified. After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class and the evaluation of some of them leads to the important conclusion that which side information creation methods provide better rate-distortion (RD) performance depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solutions for specific types of content.
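As a concrete example of the guess class, side information is often created by motion-compensated temporal interpolation between two decoded key frames. The sketch below is a deliberately simplified version of that idea (exhaustive block matching, linear motion, midway frame); function names and parameters are illustrative, not taken from any cited codec:

    # Simplified "guess"-class side information: block-based motion-
    # compensated interpolation between two decoded key frames.
    import numpy as np

    def block_match(prev, nxt, y, x, bs=8, search=4):
        """Motion vector (dy, dx) into prev that best explains, by SAD,
        the block of nxt at (y, x), via exhaustive search."""
        h, w = prev.shape
        ref = nxt[y:y+bs, x:x+bs].astype(float)
        best, best_mv = np.inf, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                if yy < 0 or xx < 0 or yy + bs > h or xx + bs > w:
                    continue
                cost = np.abs(prev[yy:yy+bs, xx:xx+bs].astype(float) - ref).sum()
                if cost < best:
                    best, best_mv = cost, (dy, dx)
        return best_mv

    def interpolate_side_info(prev, nxt, bs=8):
        """Estimate the intermediate frame by averaging motion-compensated
        blocks from both key frames, assuming linear motion."""
        h, w = prev.shape
        si = np.zeros((h, w))
        for y in range(0, h - h % bs, bs):
            for x in range(0, w - w % bs, bs):
                dy, dx = block_match(prev, nxt, y, x)
                # Halve the vector: the frame to estimate sits midway in time.
                hy = min(max(y + dy // 2, 0), h - bs)
                hx = min(max(x + dx // 2, 0), w - bs)
                si[y:y+bs, x:x+bs] = 0.5 * (prev[hy:hy+bs, hx:hx+bs].astype(float)
                                            + nxt[y:y+bs, x:x+bs].astype(float))
        return si.astype(prev.dtype)

The closer this estimate is to the true Wyner-Ziv frame, the fewer parity bits the decoder must request, which is why side information quality dominates RD performance.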
Abstract:
Master's degree in Chemical Engineering
Abstract:
Master's dissertation presented to the Instituto de Contabilidade e Administração do Porto to obtain the degree of Master in Auditing, under the supervision of Doutor José Campos Amorim
Abstract:
Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit error rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows in the parity-check matrix. This makes it possible to gracefully change the compression ratio of the source (DCT coefficient bitplane) according to the correlation between the original and the side information. The proposed LDPC codes achieve good performance for a wide range of source correlations and a better RD performance when compared to the popular turbo codes.
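Under the assumption that merging two check nodes amounts to adding the corresponding rows of the binary parity-check matrix modulo 2, the mechanism can be sketched as follows; the pairing rule and names are illustrative:

    # Check-node merging sketch: XORing two rows of H drops one parity
    # constraint, so the transmitted syndrome shrinks by one bit and the
    # compression ratio rises. Pairing rule and names are illustrative.
    import numpy as np

    def merge_check_nodes(H, i, j):
        """Merge check nodes i and j of binary parity-check matrix H (GF(2))."""
        merged = (H[i] + H[j]) % 2          # XOR of the two parity equations
        keep = [r for r in range(H.shape[0]) if r not in (i, j)]
        return np.vstack([H[keep], merged[None, :]])

    H = (np.random.default_rng(0).random((4, 8)) < 0.4).astype(np.uint8)
    print(H.shape, "->", merge_check_nodes(H, 0, 1).shape)   # (4, 8) -> (3, 8)

Graceful rate adaptation then consists of starting from a strongly merged matrix and unmerging nodes (sending extra syndrome bits) whenever decoding fails for the current source correlation.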
Abstract:
The aim of this study is to examine the implications of the IPPA for the perception of illness and wellbeing in MS patients. Methods - This is a quasi-experimental, non-randomized study with 24 MS patients, diagnosed at least 1 year before, and with an EDSS score under 7. We used the IPPA in 3 groups of eight people in 3 Portuguese hospitals (Lisbon, Coimbra, and Porto). The sessions were held once a week for 90 minutes, over a period of 7 weeks. The instruments used were the question "Please classify the severity of your disease" and the Personal Wellbeing Scale (PWS), applied at the beginning (time A) and end (time B) of the IPPA. We used SPSS version 20. A non-parametric statistical hypothesis test (Wilcoxon test) was used for the variable analysis. The intervention followed the recommendations of the Helsinki Declaration. Results - The results suggest that there are differences between times A and B: the perception of illness decreased (p<0.08), while wellbeing increased (p<0.01). Conclusions: The IPPA can play an important role in modifying the perception of disease severity and personal wellbeing.
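For illustration, the paired comparison described above can be reproduced with a Wilcoxon signed-rank test in a few lines. The numbers below are made up; they are not the study's data:

    # Wilcoxon signed-rank test on paired before/after scores.
    # The ratings below are invented for illustration only.
    from scipy.stats import wilcoxon

    severity_time_A = [7, 6, 8, 5, 7, 6, 9, 6]   # hypothetical ratings
    severity_time_B = [6, 5, 7, 5, 6, 5, 8, 5]

    stat, p = wilcoxon(severity_time_A, severity_time_B)
    print(f"Wilcoxon statistic = {stat}, p = {p:.3f}")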
Abstract:
Purpose: The most recent Varian® micro multileaf collimator (MLC), the High Definition (HD120) MLC, was modeled using the BEAMNRC Monte Carlo code. This model was incorporated into a Varian medical linear accelerator, for a 6 MV beam, in static and dynamic mode. The model was validated by comparing simulated profiles with measurements. Methods: The Varian® Trilogy® (2300C/D) accelerator model was accurately implemented using the state-of-the-art Monte Carlo simulation program BEAMNRC and validated against off-axis and depth dose profiles measured using ionization chambers, by adjusting the energy and the full width at half maximum (FWHM) of the initial electron beam. The HD120 MLC was modeled by developing a new BEAMNRC component module (CM), designated HDMLC, adapting the available DYNVMLC CM and incorporating the specific characteristics of this new micro MLC. The leaf dimensions were provided by the manufacturer. The geometry was visualized by tracing particles through the CM and recording their positions when a leaf boundary is crossed. The leaf material density and the abutting air gap between leaves were adjusted in order to obtain a good agreement between the simulated leakage profiles and EBT2 film measurements performed in a solid water phantom. To validate the HDMLC implementation, additional MLC static patterns were also simulated and compared to additional measurements. Furthermore, the ability to simulate dynamic MLC fields was implemented in the HDMLC CM. The simulation results for these fields were compared with EBT2 film measurements performed in a solid water phantom. Results: Overall, the discrepancies between the open-field simulations and the measurements using ionization chambers in a water phantom, with and without the MLC, are below 2% for the off-axis profiles; for the depth-dose profiles they are below 2% beyond the depth of maximum dose and below 4% in the build-up region. Under the conditions of these simulations, this tungsten-based MLC has a density of 18.7 g cm⁻³ and an overall leakage of about 1.1 ± 0.03%. The discrepancies between the film-measured and simulated closed and blocked fields are below 2% and 8%, respectively. Other measurements were performed for alternated leaf patterns and the agreement is satisfactory (to within 4%). The dynamic mode for this MLC was implemented and the discrepancies between film measurements and simulations are within 4%. Conclusions: The Varian® Trilogy® (2300 C/D) linear accelerator including the HD120 MLC was successfully modeled and simulated using the Monte Carlo BEAMNRC code by developing an independent CM, the HDMLC CM, in both static and dynamic modes.
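The profile comparisons reported above reduce, at their simplest, to pointwise percent discrepancies between normalized simulated and measured curves. The sketch below is only a schematic of that comparison, with invented values; it is not the study's analysis pipeline:

    # Schematic profile comparison: pointwise discrepancy, in percent of
    # the profile maximum, between simulation and measurement. Invented data.
    import numpy as np

    def percent_discrepancy(simulated, measured):
        s = np.asarray(simulated, float)
        m = np.asarray(measured, float)
        s, m = s / s.max(), m / m.max()    # normalize each to its maximum
        return 100.0 * np.abs(s - m)       # percent of the normalized maximum

    sim  = [0.20, 0.80, 1.00, 0.80, 0.20]  # hypothetical off-axis profile
    meas = [0.21, 0.79, 1.00, 0.81, 0.20]
    print("max discrepancy: %.1f%%" % percent_discrepancy(sim, meas).max())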