35 results for Free-riding consumers
at Instituto Politécnico do Porto, Portugal
Abstract:
Within a country-size asymmetric monetary union, idiosyncratic shocks and national fiscal stabilization policies cause asymmetric cross-border effects. These effects are a source of strategic interactions between non-coordinated fiscal and monetary policies: on the one hand, due to the larger externalities they impose on the union, large countries face fewer incentives to pursue free-riding fiscal policies; on the other hand, a stronger strategic position vis-à-vis the central bank encourages the use of fiscal policy to deliberately influence monetary policy. Additionally, the existence of non-distortionary government financing may also shape policy interactions. As a result, optimal policy regimes may diverge not only across the union members, but also between the latter and the monetary union. In a two-country micro-founded New-Keynesian model for a monetary union, we consider two fiscal policy scenarios: (i) lump-sum taxes are raised to fully finance the government budget and (ii) lump-sum taxes do not ensure balanced budgets in each period; therefore, fiscal and monetary policies are expected to impinge on debt sustainability. For several degrees of country-size asymmetry, we compute optimal discretionary and dynamic non-cooperative policy games and compare their stabilization performance using a union-wide welfare measure. We also assess whether these outcomes could be improved, for the monetary union, through institutional policy arrangements. We find that, in the presence of government indebtedness, monetary policy optimally deviates from macroeconomic stabilization towards debt stabilization. We also find that policy cooperation is always welfare-increasing for the monetary union as a whole; however, indebted large countries may strongly oppose this arrangement in favour of fiscal leadership. In this case, delegation of monetary policy to a conservative central bank proves fruitful in improving the union's welfare.
Abstract:
With the liberalization of the electricity market, distribution and retail companies are looking for better market strategies based on adequate information about the consumption patterns of their electricity consumers. A good insight into consumers' behavior will allow the definition of specific contract aspects based on the different consumption patterns. In order to form the different consumer classes, and to find a set of representative consumption patterns, we use electricity consumption data from a utility client's database and two approaches: the Two-Step clustering algorithm and the WEACS approach, based on evidence accumulation (EAC), for combining partitions in a clustering ensemble. While EAC uses a voting mechanism to produce a co-association matrix based on the pairwise associations obtained from N partitions, where each partition has equal weight in the combination process, the WEACS approach uses subsampling and weights the partitions differently. As a complementary step to the WEACS approach, we combine the partitions obtained in the WEACS approach with the ALL clustering ensemble construction method, and we use the Ward link algorithm to obtain the final data partition. The characterization of the obtained consumer clusters was performed using the C5.0 classification algorithm. Experimental results showed that the WEACS approach leads to better results than many other clustering approaches.
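A minimal sketch of the evidence accumulation (EAC) voting step is given below, assuming equally weighted partitions; the subsampling and partition weighting that distinguish WEACS, as well as the ALL construction method, are omitted, and all function names and toy data are illustrative only.

```python
# Evidence accumulation sketch: vote over N partitions to build a
# co-association matrix, then extract a final partition with Ward linkage.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def eac_coassociation(partitions, n_samples):
    """Co-association matrix: fraction of partitions grouping each pair together."""
    C = np.zeros((n_samples, n_samples))
    for labels in partitions:
        C += (labels[:, None] == labels[None, :]).astype(float)
    return C / len(partitions)

def combine_partitions(partitions, n_samples, n_clusters):
    C = eac_coassociation(partitions, n_samples)
    # Condensed dissimilarity 1 - C feeds a Ward-link hierarchical clustering.
    condensed = 1.0 - C[np.triu_indices(n_samples, k=1)]
    Z = linkage(condensed, method="ward")
    return fcluster(Z, t=n_clusters, criterion="maxclust")

# Toy example: combine three partitions of six consumers into two final classes.
parts = [np.array([0, 0, 0, 1, 1, 1]),
         np.array([0, 0, 1, 1, 1, 1]),
         np.array([0, 0, 0, 0, 1, 1])]
print(combine_partitions(parts, n_samples=6, n_clusters=2))
```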
Abstract:
This paper deals with the establishment of a methodology for characterizing the electric power profiles of medium voltage (MV) consumers. The characterization is supported by the knowledge discovery in databases (KDD) process. Data mining techniques are used with the purpose of obtaining typical load profiles of MV customers and specific knowledge of their consumption habits. In order to form the different customer classes and to find a set of representative consumption patterns, a hierarchical clustering algorithm and a clustering ensemble combination approach (WEACS) are used. Taking into account the typical consumption profile of the class to which the customers belong, new tariff options were defined and new energy coefficient prices were proposed. Finally, based on the results obtained, the consequences these will have on the interaction between customers and electric power suppliers are analyzed.
Abstract:
The large penetration of intermittent resources, such as solar and wind generation, motivates the use of storage systems in order to improve power system operation. Electric Vehicles (EVs) with vehicle-to-grid (V2G) capability can operate as a means of storing energy. This paper proposes an algorithm to be included in a SCADA (Supervisory Control and Data Acquisition) system, which performs intelligent management of three types of consumers (domestic, commercial and industrial) and includes the joint management of loads and of the charging/discharging of EV batteries. The proposed methodology has been implemented in a SCADA system developed by the authors of this paper, the SCADA House Intelligent Management (SHIM) system. Any event in the system, such as a Demand Response (DR) event, triggers an optimization algorithm that performs the optimal scheduling of energy resources (including loads and EVs), taking into account the priorities of each load defined by the installation users. A case study considering a specific consumer with several loads and EVs is presented in this paper.
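The SHIM optimization itself is not reproduced in the abstract; the sketch below is only an illustrative, greedy stand-in for a DR-triggered scheduling decision that respects user-defined load priorities, and all load names, priorities and power values are hypothetical.

```python
# Illustrative greedy response to a Demand Response (DR) event: curtail the
# least important loads first, then discharge EV batteries (V2G) if the
# requested power reduction has still not been reached.

def respond_to_dr_event(loads, evs, required_reduction_kw):
    """Return a curtailment/discharge plan meeting a DR power-reduction target."""
    plan, achieved = [], 0.0
    # Loads sorted by user-defined priority (here, 1 = least important, cut first).
    for name, power_kw, priority in sorted(loads, key=lambda load: load[2]):
        if achieved >= required_reduction_kw:
            break
        plan.append(("curtail", name, power_kw))
        achieved += power_kw
    # Cover any remaining deficit with EV battery discharge.
    for name, available_kw in evs:
        if achieved >= required_reduction_kw:
            break
        use = min(available_kw, required_reduction_kw - achieved)
        plan.append(("discharge", name, use))
        achieved += use
    return plan, achieved

# Hypothetical domestic consumer with three prioritized loads and one EV.
loads = [("HVAC", 2.0, 2), ("water_heater", 1.5, 1), ("lighting", 0.3, 3)]
evs = [("EV_1", 3.0)]
print(respond_to_dr_event(loads, evs, required_reduction_kw=3.0))
```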
Abstract:
With the liberalization of the electricity market, distribution and retail companies are looking for better market strategies based on adequate information about the consumption patterns of their electricity customers. In this environment all consumers are free to choose their electricity supplier. A good insight into customers' behaviour will allow the definition of specific contract aspects based on the different consumption patterns. In this paper, Data Mining (DM) techniques are applied to electricity consumption data from a utility client's database. To form the different customer classes, and to find a set of representative consumption patterns, we have used the Two-Step algorithm, a hierarchical clustering algorithm. Each consumer class is represented by its load profile resulting from the clustering operation. Next, to characterize each consumer class, a classification model is constructed with the C5.0 classification algorithm.
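A compact sketch of this pipeline follows, with stand-ins for the tools named above: scikit-learn's AgglomerativeClustering takes the place of the Two-Step algorithm, a CART decision tree takes the place of C5.0, and the load data are randomly generated purely for illustration.

```python
# Cluster daily load curves, derive a representative load profile per class,
# then fit a classification model that characterizes the classes.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
daily_loads = rng.random((200, 24))          # 200 consumers x 24 hourly readings

# 1) Form consumer classes and their representative (mean) load profiles.
labels = AgglomerativeClustering(n_clusters=4).fit_predict(daily_loads)
profiles = np.vstack([daily_loads[labels == k].mean(axis=0) for k in range(4)])

# 2) Characterize each class with a decision tree (a stand-in for C5.0).
tree = DecisionTreeClassifier(max_depth=3).fit(daily_loads, labels)
print(profiles.shape, tree.score(daily_loads, labels))
```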
Abstract:
This paper concerns the establishment of a Virtual Producer/Consumer Agent (VPCA) in order to optimize the integrated management of distributed energy resources and to improve and control Demand Side Management (DSM) and its aggregated loads. The paper presents the VPCA architecture and the proposed function-based organization used to coordinate the several generation technologies, the different load types and the storage systems. This VPCA organization uses a framework based on data mining techniques to characterize the customers. The paper includes the results of several experimental test cases, using real data and taking into account electricity generation resources as well as consumption data.
Abstract:
In real optimization problems, usually neither the analytical expression of the objective function nor its derivatives are known, or they are too complex to use. In these cases it becomes essential to use optimization methods in which the calculation of the derivatives, or the verification of their existence, is not necessary: direct search methods, or derivative-free methods, are one solution. When the problem has constraints, penalty functions are often used. Unfortunately, the choice of the penalty parameters is frequently very difficult, because most strategies for choosing them are heuristic. Filter methods appeared as an alternative to penalty functions. A filter algorithm introduces a function that aggregates the constraint violations and constructs a biobjective problem, in which a step is accepted if it reduces either the objective function or the constraint violation. This makes filter methods less parameter-dependent than penalty functions. In this work, we present a new direct search method, based on simplex methods, for general constrained optimization that combines the features of simplex and filter methods. This method neither computes nor approximates derivatives, penalty constants or Lagrange multipliers. The basic idea of the simplex filter algorithm is to construct an initial simplex and use it to drive the search. We illustrate the behavior of our algorithm through some examples. The proposed methods were implemented in Java.
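A minimal sketch of the filter acceptance rule mentioned above, assuming constraints of the form g_i(x) ≤ 0 whose violations are aggregated into a measure h(x); the usual sufficient-decrease margins and the simplex operations themselves are omitted, and all function names are illustrative.

```python
# Filter acceptance sketch: a trial pair (f, h) is accepted if no entry
# already in the filter dominates it, i.e. it improves the objective or the
# aggregated constraint violation against every stored entry.

def violation(constraints, x):
    """Aggregate constraint violation h(x) for constraints g_i(x) <= 0."""
    return sum(max(0.0, g(x)) for g in constraints)

def acceptable(trial, filter_entries):
    """True if the trial (f, h) pair is not dominated by any filter entry."""
    f_t, h_t = trial
    return all(f_t < f_k or h_t < h_k for f_k, h_k in filter_entries)

def add_to_filter(trial, filter_entries):
    """Store an accepted pair and drop the entries it dominates."""
    f_t, h_t = trial
    kept = [(f, h) for f, h in filter_entries if f < f_t or h < h_t]
    return kept + [trial]

# Example: (f=1.2, h=0.0) is acceptable to a filter holding (1.5, 0.1) and
# (1.0, 0.4), because it improves at least one measure against each entry.
flt = [(1.5, 0.1), (1.0, 0.4)]
print(acceptable((1.2, 0.0), flt))     # True
print(add_to_filter((1.2, 0.0), flt))  # (1.5, 0.1) is dominated and removed
```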
Abstract:
The filter method is a technique for solving nonlinear programming problems. The filter algorithm has two phases in each iteration: the first reduces a measure of infeasibility, while the second reduces the objective function value. In real optimization problems, usually the objective function is not differentiable or its derivatives are unknown. In these cases it becomes essential to use optimization methods in which the calculation of the derivatives, or the verification of their existence, is not necessary: direct search methods, or derivative-free methods, are examples of such techniques. In this work we present a new direct search method, based on simplex methods, for general constrained optimization that combines the features of simplex and filter methods. This method neither computes nor approximates derivatives, penalty constants or Lagrange multipliers.
Abstract:
Two chromatographic methods, gas chromatography with flame ionization detection (GC-FID) and liquid chromatography with ultraviolet detection (LC-UV), were used to determine furfuryl alcohol in several kinds of foundry resins, after application of an optimised extraction procedure. The GC method developed proved applicable regardless of the kind of resin. Analysis by LC was suitable only for furanic resins; the presence of interferences in the phenolic resins did not allow an appropriate quantification by LC. Both methods gave accurate and precise results. Recoveries were >94%; relative standard deviations were ≤7% and ≤0.3%, respectively, for the GC and LC methods. Good relative deviations between the two methods were found (≤3%).
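For reference, the figures of merit quoted above (recovery, relative standard deviation and inter-method relative deviation) follow directly from replicate measurements; the sketch below computes them on hypothetical values, since the raw data are not reported in the abstract.

```python
# Figures of merit for the chromatographic methods, computed on
# hypothetical replicate results (the real data are not given here).
import statistics

def recovery_pct(measured_mean, spiked_amount):
    return 100.0 * measured_mean / spiked_amount

def rsd_pct(replicates):
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

def relative_deviation_pct(value_a, value_b):
    return 100.0 * abs(value_a - value_b) / ((value_a + value_b) / 2.0)

gc_replicates = [9.5, 9.6, 9.4]      # hypothetical GC-FID results
lc_replicates = [9.55, 9.57, 9.56]   # hypothetical LC-UV results
print(recovery_pct(statistics.mean(gc_replicates), 10.0))  # recovery vs. a 10.0 spike
print(rsd_pct(gc_replicates), rsd_pct(lc_replicates))      # precision of each method
print(relative_deviation_pct(statistics.mean(gc_replicates),
                             statistics.mean(lc_replicates)))
```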
Abstract:
Formaldehyde is a toxic component present in foundry resins. Its quantification is important for the characterisation of the resin (kind and degradation) as well as for the evaluation of free contaminants present in wastes generated by the foundry industry. The complexity of the matrices considered suggests the need for separative techniques. The method developed for the identification and quantification of formaldehyde in foundry resins is based on the determination of free carbonyl compounds by derivatization with 2,4-dinitrophenylhydrazine (DNPH), adapted to the matrices considered and using liquid chromatography (LC) with UV detection. Formaldehyde determinations in several foundry resins gave precise results: mean recovery and R.S.D. were, respectively, >95% and 5%. Analyses by the hydroxylamine reference method gave comparable results. The results showed that the hydroxylamine reference method is applicable only to a specific kind of resin, while the developed method performs well for all the studied resins.
Abstract:
Phenol is a toxic compound present in a wide variety of foundry resins. Its quantification is important for the characterization of the resins as well as for the evaluation of free contaminants present in foundry wastes. Two chromatographic methods, liquid chromatography with ultraviolet detection (LC-UV) and gas chromatography with flame ionization detection (GC-FID), were developed for the analysis of free phenol in several foundry resins after a simple extraction procedure (30 min). Both chromatographic methods were suitable for the determination of phenol in the studied furanic and phenolic resins, showing good selectivity, accuracy (recovery 99–100%; relative deviations <5%), and precision (coefficients of variation <6%). The ASTM reference method used was found to be useful only for the analysis of phenolic resins, while the LC and GC methods were applicable to all the studied resins. The developed methods reduce the time of analysis from 3.5 hours to about 30 min and can readily be used in routine quality control laboratories.
Abstract:
Celiac disease (CD) is an autoimmune enteropathy characterized by an inappropriate T-cell-mediated immune response to the ingestion of certain dietary cereal proteins in genetically susceptible individuals. This disorder has environmental, genetic, and immunological components. CD presents a prevalence of up to 1% in populations of European ancestry, yet a high percentage of cases remain underdiagnosed. Diagnosis and treatment should be made early, since untreated disease causes growth retardation and atypical symptoms, such as infertility or neurological disorders. The diagnostic criteria for CD, which require endoscopy with small-bowel biopsy, have been changing over the last few decades, especially due to the advent of serological tests with higher sensitivity and specificity. The use of serological markers can be very useful to rule out clinically suspicious cases and also to help monitor patients after adherence to a gluten-free diet. Since the current treatment consists of a life-long gluten-free diet, which leads to significant clinical and histological improvement, the standardization of an assay to unequivocally assess gluten in gluten-free foodstuffs is of major importance.
Abstract:
In this paper, we characterize two power indices introduced in [1] using two different modifications of the monotonicity property first stated by [2]. The sets of properties are easily comparable with each other and with previous characterizations of other power indices.
Abstract:
This communication presents a novel kind of silicon nanomaterial: freestanding Si nanowire arrays (Si NWAs), which are synthesized facilely by one-step, template-free electro-deoxidation of SiO2 in molten CaCl2. A preliminary investigation of the self-assembling growth process of this material is also presented.
Abstract:
The TEM family of enzymes has had a crucial impact on the pharmaceutical industry due to its important role in antibiotic resistance. Even with the latest technologies in structural biology and genomics, no 3D structure of a TEM-1/antibiotic complex prior to acylation is known. Therefore, the comprehension of their capability to acylate antibiotics has been based on the uncomplexed macromolecular structure of the protein. In this work, molecular docking, molecular dynamics simulations, and relative free energy calculations were applied in order to obtain a comprehensive and thorough analysis of the TEM-1/ampicillin and TEM-1/amoxicillin complexes. We described the complexes and analyzed the effect of ligand binding on the overall structure. We clearly demonstrate that the key residues involved in the stability of the ligand (hot spots) vary with the nature of the ligand. Structural effects such as (i) the distances between interfacial residues (Ser70-Oγ and Lys73-Nζ, Lys73-Nζ and Ser130-Oγ, and Ser70-Oγ and Ser130-Oγ), (ii) side-chain rotamer variation (Tyr105 and Glu240), and (iii) the presence of conserved waters can also be influenced by ligand binding. This study supports the hypothesis that TEM-1 undergoes structural modifications upon ligand binding.