933 results for: consumer, control, demand, electrical energy, network, potential, response, shifting, vehicles
Abstract:
Electrical energy storage is a very important issue today. Since electricity is difficult to store directly, it can be stored in other forms and converted back to electricity when needed. Storage technologies for electricity can therefore be classified by the form of storage; here we focus on electrochemical energy storage systems, better known as electrochemical batteries. By far the most widespread batteries are lead-acid batteries, in their two main types: flooded and valve-regulated. Batteries are needed in many important applications, such as renewable energy systems and motor vehicles, so reliable battery models are needed to simulate these complex electrical systems. Although models developed by chemistry experts exist, they are too complex and are not expressed in terms of electrical networks; they are therefore inconvenient for practical use by electrical engineers, who need to interface battery models with models of other electrical systems, usually described by means of electrical circuits. Many modelling techniques are available in the literature. Starting from the Thevenin-based electrical model, the model can be adapted to represent lead-acid batteries more reliably by adding a parasitic-reaction branch and a parallel network. The third-order formulation of this model can be chosen as a trustworthy general-purpose model, characterized by a good ratio between accuracy and complexity. Considering the equivalent circuit network, all the equations describing the battery model are discussed and then implemented one by one in Matlab/Simulink. The model is finally validated and then used to simulate battery behaviour in different typical conditions.
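As a rough illustration of the equivalent-circuit approach described above (not the abstract's third-order model, which adds a parasitic branch and further RC stages), the sketch below Euler-integrates a minimal first-order Thevenin battery model. All parameter values are invented for illustration only.

```python
# Minimal first-order Thevenin battery sketch: an ideal source E0, a series
# resistance R0, and one RC branch for diffusion/overvoltage dynamics.
# Parameter values are illustrative assumptions, not fitted data.

def simulate_battery(current, dt=1.0, e0=12.6, r0=0.05, r1=0.02, c1=2000.0):
    """Euler-integrate terminal voltage for a discharge current profile (A).

    e0: open-circuit voltage [V], r0: series resistance [ohm],
    r1, c1: RC branch resistance [ohm] and capacitance [F].
    Returns one terminal voltage [V] per time step.
    """
    v1 = 0.0                                    # voltage across the RC branch
    out = []
    for i in current:
        v1 += dt * (i / c1 - v1 / (r1 * c1))    # dv1/dt = i/C1 - v1/(R1*C1)
        out.append(e0 - i * r0 - v1)            # terminal voltage
    return out

volts = simulate_battery([10.0] * 5)            # constant 10 A discharge
```

A real lead-acid model would also make the parameters depend on state of charge and temperature, as the Matlab/Simulink implementation discussed above does.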
Abstract:
The synthetic control (SC) method has been recently proposed as an alternative to estimate treatment effects in comparative case studies. The SC relies on the assumption that there is a weighted average of the control units that reconstructs the potential outcome of the treated unit in the absence of treatment. If these weights were known, then one could estimate the counterfactual for the treated unit using this weighted average. With these weights, the SC would provide an unbiased estimator for the treatment effect even if selection into treatment is correlated with the unobserved heterogeneity. In this paper, we revisit the SC method in a linear factor model where the SC weights are considered nuisance parameters that are estimated to construct the SC estimator. We show that, when the number of control units is fixed, the estimated SC weights will generally not converge to the weights that reconstruct the factor loadings of the treated unit, even when the number of pre-intervention periods goes to infinity. As a consequence, the SC estimator will be asymptotically biased if treatment assignment is correlated with the unobserved heterogeneity. The asymptotic bias only vanishes when the variance of the idiosyncratic error goes to zero. We suggest a slight modification in the SC method that guarantees that the SC estimator is asymptotically unbiased and has a lower asymptotic variance than the difference-in-differences (DID) estimator when the DID identification assumption is satisfied. If the DID assumption is not satisfied, then both estimators would be asymptotically biased, and it would not be possible to rank them in terms of their asymptotic bias.
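A toy sketch of the weight-fitting idea behind the SC estimator: the weights minimize pre-treatment fit subject to being non-negative and summing to one. Assuming just two control units, the simplex reduces to a single weight w, which a grid search can find; the data below are made up for illustration.

```python
# Toy synthetic-control weight fit with two control units, so the weight
# vector is (w, 1 - w). Grid-search w in [0, 1] to minimize the
# pre-treatment squared error of the weighted average of controls.

def fit_sc_weight(y_treated, y_control_a, y_control_b, steps=1000):
    """Return w minimizing sum((y_treated - (w*a + (1-w)*b))^2) on a grid."""
    best_w, best_loss = 0.0, float("inf")
    for k in range(steps + 1):
        w = k / steps
        loss = sum((yt - (w * a + (1 - w) * b)) ** 2
                   for yt, a, b in zip(y_treated, y_control_a, y_control_b))
        if loss < best_loss:
            best_w, best_loss = w, loss
    return best_w

# Treated unit lies exactly midway between the two controls pre-treatment:
y1 = [1.5, 2.5, 3.5]
a = [1.0, 2.0, 3.0]
b = [2.0, 3.0, 4.0]
w = fit_sc_weight(y1, a, b)   # expect w close to 0.5
```

With more control units the problem becomes a constrained least-squares program over the simplex; the paper's point is that these estimated weights need not converge to the ones reconstructing the factor loadings.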
Abstract:
Thermal analysis methods (differential scanning calorimetry, thermogravimetric analysis, and dynamic mechanical thermal analysis) were used to characterize the nature of polyester-melamine coating matrices prepared under nonisothermal, high-temperature, rapid-cure conditions. The results were interpreted in terms of the formation of two interpenetrating networks with different glass-transition temperatures (a cocondensed polyester-melamine network and a self-condensed melamine-melamine network), a phenomenon not generally seen in chemically similar, isothermally cured matrices. The self-condensed network manifested at high melamine levels, but the relative concentrations of the two networks were critically dependent on the cure conditions. The optimal cure (defined in terms of the attainment of a peak metal temperature) was achieved at different oven temperatures and different oven dwell times, and so the actual energy absorbed varied over a wide range. Careful control of the energy absorption, by the selection of appropriate cure conditions, controlled the relative concentrations of the two networks and, therefore, the flexibility and hardness of the resultant coatings. (C) 2003 Wiley Periodicals, Inc. J Polym Sci Part A: Polym Chem 41: 1603-1621, 2003.
Abstract:
Flow control in computer communication systems is generally a multi-layered structure, consisting of several mechanisms operating independently at different levels. Evaluation of the performance of networks in which different flow control mechanisms act simultaneously is an important area of research, and is examined in depth in this thesis. This thesis presents the modelling of a finite-resource computer communication network equipped with three levels of flow control, based on closed queueing network theory. The flow control mechanisms considered are: end-to-end control of virtual circuits, network access control of external messages at the entry nodes, and hop-level control between nodes. The model is solved by a heuristic technique, based on an equivalent reduced network and heuristic extensions to the mean value analysis algorithm. The method has significant computational advantages and overcomes the limitations of the exact methods. It can be used to solve large network models with finite buffers and many virtual circuits. The model and its heuristic solution are validated by simulation. The interaction between the three levels of flow control is investigated. A queueing model is developed for the admission delay on virtual circuits with end-to-end control, in which messages arrive from independent Poisson sources. The selection of the optimum window limit is considered. Several advanced network access schemes are postulated to improve the network performance as well as that of selected traffic streams, and numerical results are presented. A model for the dynamic control of input traffic is developed. Based on Markov decision theory, an optimal control policy is formulated. Numerical results are given and throughput-delay performance is shown to be better with dynamic control than with static control.
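The exact single-class mean value analysis (MVA) algorithm that the thesis's heuristic extends can be sketched in a few lines; the finite-buffer, multi-level-control extensions themselves are not reproduced here.

```python
# Exact single-class MVA for a closed queueing network of FCFS stations.
# service_demands[k] = visit ratio * mean service time at station k.
# Iterates over population sizes 1..N using the arrival theorem.

def mva(service_demands, n_customers):
    """Return (throughput, per-station mean queue lengths) at population N."""
    q = [0.0] * len(service_demands)   # queue lengths at population n - 1
    x = 0.0
    for n in range(1, n_customers + 1):
        # residence time at each station: R_k = D_k * (1 + Q_k(n-1))
        r = [d * (1 + qk) for d, qk in zip(service_demands, q)]
        x = n / sum(r)                  # throughput: X(n) = n / sum_k R_k
        q = [x * rk for rk in r]        # Little's law per station
    return x, q

throughput, queues = mva([1.0, 1.0], 2)   # two balanced stations, 2 customers
```

For two identical stations with demand 1.0 and two customers, symmetry puts one customer at each station on average; the exact method's cost grows with population and chains, which is what motivates heuristic approximations for large models.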
Abstract:
Neuroimaging studies in bipolar disorder report gray matter volume (GMV) abnormalities in neural regions implicated in emotion regulation. This includes a reduction in ventral/orbital medial prefrontal cortex (OMPFC) GMV and, inconsistently, increases in amygdala GMV. We aimed to examine OMPFC and amygdala GMV in bipolar disorder type 1 patients (BPI) versus healthy control participants (HC), and the potential confounding effects of gender, clinical and illness history variables and psychotropic medication upon any group differences that were demonstrated in OMPFC and amygdala GMV. Images were acquired from 27 BPI (17 euthymic, 10 depressed) and 28 age- and gender-matched HC in a 3T Siemens scanner. Data were analyzed with SPM5 using voxel-based morphometry (VBM) to assess main effects of diagnostic group and gender upon whole brain (WB) GMV. Post-hoc analyses were subsequently performed using SPSS to examine the extent to which clinical and illness history variables and psychotropic medication contributed to GMV abnormalities in BPI in a priori and non-a priori regions as demonstrated by the above VBM analyses. BPI showed reduced GMV in bilateral posteromedial rectal gyrus (PMRG), but no abnormalities in amygdala GMV. BPI also showed reduced GMV in two non-a priori regions: left parahippocampal gyrus and left putamen. For left PMRG GMV, there was a significant group by gender by trait anxiety interaction. GMV was significantly reduced in male low-trait-anxiety BPI versus male low-trait-anxiety HC, and in high- versus low-trait-anxiety male BPI. Our results show that in BPI there were significant effects of gender and trait anxiety, with male BPI and those high in trait anxiety showing reduced left PMRG GMV. PMRG is part of the medial prefrontal network implicated in visceromotor and emotion regulation.
Abstract:
Modern civilization has developed principally through man's harnessing of forces. For centuries man had to rely on wind, water and animal force as the principal sources of power. The advent of the industrial revolution, electrification and the development of new technologies led to the application of wood, coal, gas, petroleum and uranium to fuel new industries, produce goods and means of transportation, and generate the electrical energy which has become such an integral part of our lives. The geometric growth in energy consumption, coupled with the world's unrestricted growth in population, has caused a disproportionate use of these limited natural resources. The resulting energy predicament could have serious consequences within the next half century unless we commit ourselves to a philosophy of effective energy conservation and management. National legislation, along with the initiative of private industry and growing interest in the private sector, has played a major role in stimulating the adoption of energy-conserving laws, technologies, measures and practices. This is a matter of serious concern in the United States, where ninety-five percent of the commercial and industrial facilities that will be standing in the year 2000 - many in need of retrofit - are currently in place. To conserve energy, it is crucial to first understand how a facility consumes energy, how its users' needs are met, and how all internal and external elements interrelate. To this end, the major thrust of this report is to emphasize the need to develop an energy conservation plan that incorporates energy auditing and surveying techniques. Numerous energy-saving measures and practices are presented, ranging from simple no-cost opportunities to capital-intensive investments.
Abstract:
Energy is a vital resource for social and economic development. In the present scenario, the search for alternative energy sources has become fundamental, especially after the oil crises of 1973 and 1979, the Chernobyl nuclear accident in 1986 and the Kyoto Protocol in 1997. The development of new alternative energy sources aims to complement existing forms, allowing the demand for energy to be met with greater security. Brazil, guided by the goal of not polluting its energy matrix through fossil-fuel exploitation, and facing a recent energy crisis caused by a lack of rain, directs its energy policies toward the development of other renewable sources that complement hydropower. Brazil is one of the countries that stand out for wind power generation capacity in several areas, especially Rio Grande do Norte (RN), one of the states with the highest installed capacity and great potential still to be explored. In this context, the purpose of this work is to identify policies that encourage the development of wind energy in Rio Grande do Norte. The study used a qualitative data-analysis methodology, content analysis, oriented toward the characteristics of the message, its informational value, and the words, arguments and ideas expressed in it, constituting a thematic analysis. To collect the data, interviews were conducted with managers of the main organizations related to wind energy in Brazil and in the state of Rio Grande do Norte. The identification of incentive policies was carried out in three stages: the first surveyed national incentive policies that apply to all states; the second applied the questionnaire; and the third gathered data on the growth of installed capacity in RN compared with other states.
In the end, the results showed that the state of Rio Grande do Norte has no established, consolidated incentive policy for the development of wind power, only isolated actions intended to streamline bureaucratic issues related to wind farms, especially environmental ones. The absence of such a policy hinders the development of wind energy in RN and results in reduced competitiveness and weaker performance in recent energy auctions. The perceived obstacles include an insufficient workforce to produce and analyze environmental-licensing reports, the lack of an updated wind atlas for the state, and a shortfall of tax incentives. Added to these difficulties are barriers in infrastructure and logistics, including the lack of a port suitable for large loads and the need for repair, maintenance and duplication of roads and highways that remain deficient. Suggested future work includes studying the relationship between the state's energy technology park and the development of wind power, the park's influence in attracting wind-sector businesses and industries to settle in RN, and a comparison of wind-energy incentive policies across Brazilian states, observing wind development in the states under study.
Abstract:
In this dissertation, we develop a novel methodology for characterizing and simulating nonstationary, full-field, stochastic turbulent wind fields.
In this new method, nonstationarity is characterized and modeled via temporal coherence, which is quantified in the discrete frequency domain by probability distributions of the differences in phase between adjacent Fourier components.
The empirical distributions of the phase differences can also be extracted from measured data, and the resulting temporal coherence parameters can quantify the occurrence of nonstationarity in empirical wind data.
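A minimal sketch of the extraction step described above: compute the discrete Fourier components of a wind record and take the wrapped phase differences between adjacent frequency components, the quantity whose empirical distribution characterizes temporal coherence here. A naive DFT keeps the sketch dependency-free; a real implementation would use an FFT and long measured records.

```python
import cmath
import math

def phase_differences(signal):
    """Wrapped phase differences between adjacent Fourier components.

    Uses a naive O(N^2) DFT over the positive-frequency half-spectrum.
    Returns differences wrapped into [-pi, pi).
    """
    n = len(signal)
    coeffs = [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                  for t in range(n))
              for k in range(n // 2)]
    phases = [cmath.phase(c) for c in coeffs]
    return [(p2 - p1 + math.pi) % (2 * math.pi) - math.pi
            for p1, p2 in zip(phases, phases[1:])]

# Example: a short periodic record; real inputs would be measured wind speeds.
diffs = phase_differences([0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0])
```

Fitting probability distributions to such differences over many records gives the temporal coherence parameters referred to above.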
This dissertation (1) implements temporal coherence in a desktop turbulence simulator, (2) calibrates empirical temporal coherence models for four wind datasets, and (3) quantifies the increase in lifetime wind turbine loads caused by temporal coherence.
The four wind datasets were intentionally chosen from locations around the world so that they had significantly different ambient atmospheric conditions.
The prevalence of temporal coherence and its relationship to other standard wind parameters were modeled through empirical joint distributions (EJDs), which involved fitting marginal distributions and calculating correlations.
EJDs have the added benefit of being able to generate samples of wind parameters that reflect the characteristics of a particular site.
Lastly, to characterize the effect of temporal coherence on design loads, we created four models in the open-source wind turbine simulator FAST based on the WindPACT turbines, fit response surfaces to them, and used the response surfaces to calculate lifetime turbine responses to wind fields simulated with and without temporal coherence.
The training data for the response surfaces was generated from exhaustive FAST simulations that were run on the high-performance computing (HPC) facilities at the National Renewable Energy Laboratory.
This process was repeated for wind field parameters drawn from the empirical distributions and for wind samples drawn using the recommended procedure in the IEC wind turbine design standard.
The effect of temporal coherence was calculated as a percent increase in the lifetime load over the base value with no temporal coherence.
Abstract:
Greenhouses have become an invaluable source of year-round food production, and further development of viable, efficient, high-performance greenhouses is important for future food security. Closing the greenhouse envelope to the outside environment can provide benefits in space-heating energy savings, pest control, and CO2 enrichment, but it requires a novel air-conditioning system to handle the high cooling loads a greenhouse experiences. Liquid desiccant air-conditioning (LDAC) has been found to provide high latent cooling capacity, well suited to the humid microclimate of a greenhouse. TRNSYS simulations were undertaken to study the feasibility of two liquid desiccant dehumidification systems based on their capacity to control the greenhouse microclimate and on their cooling performance. The base model (B-LDAC) included a natural gas boiler and two cooling systems for seasonal operation; the second model (HP-LDAC) was a hybrid liquid desiccant-heat pump dehumidification system. The average tCOPdehum and tCOPtotal of the B-LDAC system increased from 0.40 and 0.56 in January to 0.94 and 1.09 in June; increased load and performance during a sample summer day improved these values to 3.5 and 3.0, respectively. The average eCOPdehum and eCOPtotal values were 1.0 and 1.8 in winter, and 1.7 and 2.1 in summer. The HP-LDAC system produced similar daily performance trends: the annual average eCOPdehum and eCOPtotal values were 1.3 and 1.2, but the sample day saw peaks of 2.4 and 3.2, respectively. The B-LDAC and HP-LDAC results predicted greenhouse temperatures exceeding 30°C for 34% and 17% of the month of July, respectively. Similarly, humidity levels increased in the summer months, with a maximum of 14% of the time spent above 80% relative humidity in May for both models. The annual savings in space-heating energy associated with closing the greenhouse to ventilation was 34%.
With a heat recovery ventilator on the regeneration exhaust air, the additional annual regeneration energy input was reduced by 26%, to 526 kWh/m². The models also predicted an electrical energy input of 245 kWh/m² and 305 kWh/m² for the B-LDAC and HP-LDAC simulations, respectively.
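As a hedged arithmetic check on the reported savings, assuming "reduced by 26% to 526 kWh per square metre" means 526 is the post-recovery figure, the implied pre-recovery regeneration input can be backed out:

```python
# Implied pre-recovery regeneration energy, assuming 526 kWh/m^2 is the
# figure AFTER the 26% reduction (an interpretation, not stated explicitly).
post_recovery = 526.0                       # kWh/m^2, from the abstract
baseline = post_recovery / (1 - 0.26)       # implied pre-recovery input
savings = baseline - post_recovery          # kWh/m^2 avoided by heat recovery
```

This puts the implied baseline near 711 kWh/m² and the avoided regeneration energy near 185 kWh/m², figures inferred here rather than reported in the text.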