907 results for Power and load factor


Relevance:

100.00%

Publisher:

Abstract:

Graduate program in Agronomy (Energy in Agriculture) - FCA

Relevance:

100.00%

Publisher:

Abstract:

The objective of this work is to determine the membership functions used to build a fuzzy controller that evaluates a company's energy situation with respect to its load and power factors. The energy assessment of a company is performed by technicians and experts based on its load and power factor indices and on an analysis of the machines used in its production processes. This assessment is conducted periodically to detect whether employees are using electrical energy correctly. With a fuzzy controller, this task can be automated. Building a fuzzy controller begins with defining the input and output variables and their associated membership functions. An inference method and an output processor (defuzzifier) must also be defined. Finally, technicians and experts are needed to build the rule base, consisting of the answers these professionals provide as a function of the characteristics of the input variables. The controller proposed in this paper takes the load and power factors as input variables and outputs the company's situation. Their membership functions represent fuzzy sets labeled with linguistic qualities such as "VERY BAD" and "GOOD". With Mamdani inference and Center of Area defuzzification chosen, the structure of the fuzzy controller is established; the technicians and experts in the energy field need only determine a set of rules appropriate for the chosen company. Thus, the software's interpretation of the load and power factors meets the need for a single index that indicates, on an overall basis, how rationally and efficiently energy is being used.
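The pipeline described above can be sketched in a few lines. This is a minimal illustration only: the membership functions, universes, and two-rule base below are assumptions for demonstration, not the parameters the authors elicited from experts.

```python
# Minimal Mamdani-style fuzzy controller sketch: load factor and power factor
# in, "company situation" (0-10) out, Center of Area defuzzification.
# All shapes, scales, and rules are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def low(x):      return tri(x, -0.01, 0.0, 0.6)   # input set for poor factors
def high(x):     return tri(x, 0.4, 1.0, 1.01)    # input set for good factors
def very_bad(y): return tri(y, -0.01, 0.0, 5.0)   # output set on a 0-10 scale
def good(y):     return tri(y, 5.0, 10.0, 10.01)

def evaluate(load_factor, power_factor, steps=101):
    """Mamdani inference: min for AND, max aggregation, centroid output."""
    w_bad = min(low(load_factor), low(power_factor))     # IF both low THEN VERY BAD
    w_good = min(high(load_factor), high(power_factor))  # IF both high THEN GOOD
    num = den = 0.0
    for i in range(steps):
        y = 10.0 * i / (steps - 1)
        mu = max(min(w_bad, very_bad(y)), min(w_good, good(y)))  # clipped + aggregated
        num += mu * y
        den += mu
    return num / den if den else 5.0  # Center of Area of the aggregated set

print(round(evaluate(0.9, 0.95), 1))  # high factors score near the "GOOD" end
```

In a real deployment the rule base would come from the technicians and experts, as the abstract describes, rather than from the two hard-coded rules here.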

Relevance:

100.00%

Publisher:

Abstract:

The Federal Highway Administration (FHWA) mandated utilizing the Load and Resistance Factor Design (LRFD) approach for all new bridges initiated in the United States after October 1, 2007. To achieve part of this goal, a database for Drilled Shaft Foundation Testing (DSHAFT) was developed and reported on by Garder, Ng, Sritharan, and Roling in 2012. DSHAFT is aimed at assimilating high-quality drilled shaft test data from Iowa and the surrounding regions. DSHAFT is currently housed on a project website (http://srg.cce.iastate.edu/dshaft) and contains data for 41 drilled shaft tests. The objective of this research was to utilize the DSHAFT database and develop a regional LRFD procedure for drilled shafts in Iowa with preliminary resistance factors using a probability-based reliability theory. This was done by examining current design and construction practices used by the Iowa Department of Transportation (DOT) as well as recommendations given in the American Association of State Highway and Transportation Officials (AASHTO) LRFD Bridge Design Specifications and the FHWA drilled shaft guidelines. Various analytical methods were used to estimate side resistance and end bearing of drilled shafts in clay, sand, intermediate geomaterial (IGM), and rock. Since most of the load test results obtained from O-cell do not pass the 1-in. top displacement criterion used by the Iowa DOT and the 5% of shaft diameter for top displacement criterion recommended by AASHTO, three improved procedures are proposed to generate and extend equivalent top load-displacement curves that enable the quantification of measured resistances corresponding to the displacement criteria. Using the estimated and measured resistances, regional resistance factors were calibrated following the AASHTO LRFD framework and adjusted to resolve any anomalies observed among the factors. 
To illustrate the potential and successful use of drilled shafts in Iowa, the design procedures of drilled shaft foundations were demonstrated and the advantages of drilled shafts over driven piles were addressed in two case studies.
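At its core, the LRFD approach the report calibrates reduces to checking that factored resistance meets or exceeds the factored load effect. The sketch below shows only that basic inequality; the resistance factor, load factors, and loads are illustrative placeholders, not the calibrated Iowa values from the study.

```python
# Hedged sketch of the basic LRFD check: phi * R_n >= sum(gamma_i * Q_i).
# phi, gammas, and loads below are assumed example values, not the report's.

def lrfd_ok(nominal_resistance, phi, loads_and_factors):
    """Return True if factored resistance covers the factored load effect."""
    factored_load = sum(gamma * q for gamma, q in loads_and_factors)
    return phi * nominal_resistance >= factored_load

# Hypothetical drilled shaft: 6000 kN nominal resistance, assumed phi = 0.55,
# AASHTO-style Strength I load factors on dead and live load.
loads = [(1.25, 1200.0),  # (gamma_DC, dead load in kN)
         (1.75, 800.0)]   # (gamma_LL, live load in kN)
print(lrfd_ok(6000.0, 0.55, loads))  # True: 3300 kN capacity vs 2900 kN demand
```

The calibration work described in the abstract is precisely about choosing phi so that this check achieves a target reliability for regional soil conditions and measured resistances.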

Relevance:

100.00%

Publisher:

Abstract:

Brazil is the world's largest producer of sugar cane, and the state of São Paulo concentrates the greatest share of the country's sugar cane fields. The sugar-alcohol sector has the capacity to produce enough thermal and electrical energy to supply its own production process and to sell the surplus into the electricity distribution network. It is therefore necessary to evaluate energy efficiency and rationality within the mill. Accordingly, this research analyzed the sectors of a sugar-alcohol mill located in the center-west of São Paulo state, both globally and individually, using the evaluation methodology employed by the Agência Nacional de Energia Elétrica (ANEEL) for industries that do not have cogeneration systems. In this analysis, the load and power hyperboloids were applied, based on the power factor and load factor indices, which allow the efficiency and rationality of energy use to be estimated. © 2013 IEEE.
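The two indices this kind of assessment relies on have simple definitions, sketched below with illustrative values (the mill's actual measurements are not given in the abstract):

```python
# Hedged sketch of the two indices behind an ANEEL-style energy assessment.
# Demand and power values are illustrative examples.

def load_factor(avg_demand_kw, peak_demand_kw):
    """Average demand over peak demand for the period (0-1; higher = steadier use)."""
    return avg_demand_kw / peak_demand_kw

def power_factor(active_kw, apparent_kva):
    """Active over apparent power (closer to 1 = less reactive burden)."""
    return active_kw / apparent_kva

print(load_factor(600.0, 1000.0))   # 0.6
print(power_factor(900.0, 1000.0))  # 0.9
```

A mill with a high load factor uses its peak capacity steadily, and a high power factor means little reactive power is drawn from the grid; both feed into the efficiency and rationality estimate the abstract describes.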

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a distributed predictive control methodology for indoor thermal comfort that optimizes the consumption of a limited shared energy resource through an integrated demand-side management approach involving a power price auction and an appliance load allocation scheme. The control objective for each subsystem (house or building) is to minimize the energy cost while maintaining the indoor temperature within comfort limits. In a distributed, coordinated multi-agent ecosystem, each house or building control agent achieves its objectives while sharing the available energy with the others through particular coupling constraints introduced in its underlying optimization problem. Coordination is maintained by a daily green energy auction that implements the demand-side management approach. The implemented distributed MPC algorithm is also described and validated through simulation studies.

Relevance:

100.00%

Publisher:

Abstract:

Most distributed generation and smart grid research is dedicated to studies of network operation parameters, reliability, and related topics. However, many of these works use traditional test systems, such as the IEEE test systems. This paper proposes voltage magnitude and reliability studies in the presence of fault conditions, considering realistic conditions found in countries like Brazil. The methodology is a hybrid of fuzzy sets and Monte Carlo simulation, based on fuzzy-probabilistic models and a remedial action algorithm built on optimal power flow. To illustrate the application of the proposed method, the paper includes a case study of a real 12-bus sub-transmission network.
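The Monte Carlo layer of such a hybrid study can be illustrated in miniature: sample random component outages and estimate the probability that supply is lost. The toy topology, outage rates, and seed below are assumptions for demonstration; the paper's fuzzy-probabilistic models and OPF-based remedial actions are not reproduced here.

```python
# Hedged sketch: Monte Carlo estimate of loss-of-supply probability for a toy
# load served by two parallel lines. Outage probabilities are assumed values.
import random

LINE_OUTAGE_PROB = [0.02, 0.05]  # illustrative per-line outage probabilities

def supply_lost(rng):
    """One Monte Carlo draw: the load is lost only if both parallel lines fail."""
    return all(rng.random() < p for p in LINE_OUTAGE_PROB)

def loss_of_supply_probability(trials=100_000, seed=42):
    rng = random.Random(seed)  # fixed seed for a reproducible estimate
    return sum(supply_lost(rng) for _ in range(trials)) / trials

# Analytical value is 0.02 * 0.05 = 0.001; the estimate should land close to it.
print(loss_of_supply_probability())
```

The same sampling skeleton scales to realistic networks by replacing the two-line failure test with a power flow plus remedial action evaluation per drawn system state.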

Relevance:

100.00%

Publisher:

Abstract:

Cloud data centers have been progressively adopted in different scenarios, as reflected in the execution of heterogeneous applications with diverse workloads and diverse quality of service (QoS) requirements. Virtual machine (VM) technology eases resource management in physical servers and helps cloud providers achieve goals such as optimization of energy consumption. However, the performance of an application running inside a VM is not guaranteed, due to interference among co-hosted workloads sharing the same physical resources. Moreover, the different types of co-hosted applications with diverse QoS requirements, as well as the dynamic behavior of the cloud, make efficient provisioning of resources even more difficult and a challenging problem in cloud data centers. In this paper, we address the problem of resource allocation within a data center that runs different types of application workloads, particularly CPU- and network-intensive applications. To address these challenges, we propose an interference- and power-aware management mechanism that combines a performance deviation estimator and a scheduling algorithm to guide the resource allocation in virtualized environments. We conduct simulations by injecting synthetic workloads whose characteristics follow the latest version of the Google Cloud tracelogs. The results indicate that our performance-enforcing strategy is able to fulfill contracted SLAs of real-world environments while reducing energy costs by as much as 21%.
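The general shape of interference-aware placement can be sketched as a greedy heuristic: put each VM on the host where the estimated performance deviation from co-hosted workloads is smallest. The deviation model and workload set below are illustrative assumptions, not the authors' estimator or algorithm.

```python
# Hedged sketch of interference-aware VM placement: a toy deviation model
# (interference grows with same-type co-hosted VMs) and a greedy scheduler.

def estimated_deviation(host, vm):
    """Assumed model: 10% performance deviation per same-type co-hosted VM."""
    same_type = sum(1 for other in host if other["type"] == vm["type"])
    return 0.10 * same_type

def place(vms, n_hosts=2):
    """Greedily assign each VM to the host with the lowest estimated deviation."""
    hosts = [[] for _ in range(n_hosts)]
    for vm in vms:
        best = min(hosts, key=lambda h: estimated_deviation(h, vm))
        best.append(vm)
    return hosts

vms = [{"id": 1, "type": "cpu"}, {"id": 2, "type": "net"},
       {"id": 3, "type": "cpu"}, {"id": 4, "type": "net"}]
hosts = place(vms)
# CPU- and network-intensive VMs end up mixed, so no host doubles up one type.
print([[vm["id"] for vm in h] for h in hosts])  # -> [[1, 2], [3, 4]]
```

A production scheduler would also weigh power and SLA terms, as the abstract describes; this sketch isolates only the interference-avoidance idea.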

Relevance:

100.00%

Publisher:

Abstract:

The integration of wind power in electricity generation brings new challenges to unit commitment due to the random nature of wind speed. For this particular optimisation problem, wind uncertainty has been handled in practice by means of conservative stochastic scenario-based optimisation models, or through additional operating reserve settings. However, generation companies may have different attitudes towards operating costs, load curtailment, or waste of wind energy when considering the risk caused by wind power variability. Therefore, alternative and possibly more adequate approaches should be explored. This work is divided into two main parts. First, we survey the main formulations presented in the literature for the integration of wind power in the unit commitment problem (UCP) and present an alternative model for wind-thermal unit commitment. We make use of utility theory concepts to develop a multi-criteria stochastic model. The objectives considered are the minimisation of costs, load curtailment, and waste of wind energy. These are represented by individual utility functions and aggregated into a single additive utility function. This last function is adequately linearised, leading to a mixed-integer linear programming (MILP) model that can be tackled by general-purpose solvers in order to find the most preferred solution. In the second part, we discuss the integration of pumped-storage hydro (PSH) units in the UCP with large wind penetration. These units can provide extra flexibility by using wind energy to pump and store water in the form of potential energy that can be used for generation later, during peak load periods. PSH units are added to the first model, yielding a MILP model with wind-hydro-thermal coordination. Results showed that the proposed methodology is able to reflect the risk profiles of decision makers for both models. By including PSH units, the results are significantly improved.
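The aggregation step described above, individual utilities combined into one additive utility, can be sketched as follows. The utility shapes, worst/best anchors, and weights are illustrative assumptions, not the paper's elicited preferences.

```python
# Hedged sketch of additive multi-criteria utility aggregation: cost, load
# curtailment, and wind waste each get a utility in [0, 1], then a weighted sum.
# Anchors and weights below are assumed example values.

def linear_utility(value, worst, best):
    """Map a criterion value to [0, 1]: 1 at 'best', 0 at 'worst'."""
    return max(0.0, min(1.0, (value - worst) / (best - worst)))

def additive_utility(cost, curtailment, wind_waste, weights=(0.5, 0.3, 0.2)):
    u_cost = linear_utility(cost, worst=1_000_000.0, best=0.0)  # currency units
    u_curt = linear_utility(curtailment, worst=100.0, best=0.0)  # MWh curtailed
    u_wind = linear_utility(wind_waste, worst=500.0, best=0.0)   # MWh wasted
    w_cost, w_curt, w_wind = weights
    return w_cost * u_cost + w_curt * u_curt + w_wind * u_wind

# A cheaper, lower-curtailment, lower-waste schedule scores higher:
print(additive_utility(400_000.0, 10.0, 100.0)
      > additive_utility(700_000.0, 40.0, 300.0))  # True
```

In the paper this aggregated utility is linearised and maximised inside the MILP; here the piecewise-linear clamp stands in for that treatment.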

Relevance:

100.00%

Publisher:

Abstract:

Traditional psychometric theory and practice classify people according to broad ability dimensions but do not examine how these mental processes occur. Hunt and Lansman (1975) proposed a 'distributed memory' model of cognitive processes with emphasis on how to describe individual differences, based on the assumption that each individual possesses the same components. It is in the quality of these components that individual differences arise. Carroll (1974) expands Hunt's model to include a production system (after Newell and Simon, 1973) and a response system. He developed a framework of factor analytic (FA) factors for the purpose of describing how individual differences may arise from them. This scheme is to be used in the analysis of psychometric tests. Recent advances in the field of information processing are examined and include: 1) Hunt's development of differences between subjects designated as high or low verbal, 2) Miller's pursuit of the magic number seven, plus or minus two, 3) Ferguson's examination of transfer and abilities, and 4) Brown's discoveries concerning strategy teaching and retardates. In order to examine possible sources of individual differences arising from cognitive tasks, traditional psychometric tests were searched for a suitable perceptual task which could be varied slightly and administered to gauge learning effects produced by controlling independent variables. It also had to be suitable for analysis using Carroll's framework. The Coding Task (a symbol substitution test) found in the Performance Scale of the WISC was chosen.

Two experiments were devised to test the following hypotheses: 1) High verbals should be able to complete significantly more items on the Symbol Substitution Task than low verbals (Hunt and Lansman, 1975). 2) Having previous practice on a task, where strategies involved in the task may be identified, increases the amount of output on a similar task (Carroll, 1974). 3) There should be a substantial decrease in the amount of output as the load on STM is increased (Miller, 1956). 4) Repeated measures should produce an increase in output over trials, and where individual differences in previously acquired abilities are involved, these should differentiate individuals over trials (Ferguson, 1956). 5) Teaching slow learners a rehearsal strategy would improve their learning such that it would resemble that of normals on the same task (Brown, 1974). In the first experiment, 60 subjects were divided into high and low verbal groups, each further divided randomly into a practice group and a nonpractice group. Five subjects in each group were assigned randomly to work on a five, seven, or nine digit code throughout the experiment. The practice group was given three trials of two minutes each on the practice code (designed to eliminate transfer effects due to symbol similarity) and then three trials of two minutes each on the actual SST task. The nonpractice group was given three trials of two minutes each on the same actual SST task. Results were analyzed using a four-way analysis of variance. In the second experiment, 18 slow learners were divided randomly into two groups, one receiving planned strategy practice and the other receiving random practice. Both groups worked on the actual code to be used later in the actual task. Within each group, subjects were randomly assigned to work on a five, seven, or nine digit code throughout. Both practice and actual tests consisted of three trials of two minutes each. Results were analyzed using a three-way analysis of variance.

It was found in the first experiment that 1) high or low verbal ability by itself did not produce significantly different results; however, when in interaction with the other independent variables, a difference in performance was noted. 2) The previous practice variable was significant over all segments of the experiment: those who received previous practice were able to score significantly higher than those without it. 3) Increasing the size of the load on STM severely restricts performance. 4) The effect of repeated trials proved to be beneficial; generally, gains were made on each successive trial within each group. 5) In the second experiment, slow learners who were allowed to practice randomly performed better on the actual task than subjects who were taught the code by means of a planned strategy. Upon analysis using the Carroll scheme, individual differences were noted in the ability to develop strategies of storing, searching, and retrieving items from STM, and in adopting the rehearsals necessary for retention in STM. While these strategies may benefit some, it was found that for others they may be harmful. Temporal aspects and perceptual speed were also found to be sources of variance within individuals. Generally, it was found that the largest single factor influencing learning on this task was the repeated measures. What enables gains to be made varies with individuals. Environmental factors, specific abilities, strategy development, previous learning, the amount of load on STM, and perceptual and temporal parameters all influence learning, and these have serious implications for educational programs.

Relevance:

100.00%

Publisher:

Abstract:

The fatigue crack behavior of metals and alloys under constant amplitude test conditions is usually described by relationships between the crack growth rate da/dN and the stress intensity factor range Delta K. In the present work, an enhanced two-parameter exponential equation of fatigue crack growth was introduced in order to describe the sub-critical crack propagation behavior of the Al 2524-T3 alloy, commonly used in aircraft engineering applications. It was demonstrated that, besides adequately correlating the load ratio effects, the exponential model also accounts for the slight deviations from linearity shown by the experimental curves. A comparison with the Elber, Kujawski, and "Unified Approach" models verified the better performance of the exponential model relative to the other tested models. (C) 2012 Elsevier Ltd. All rights reserved.
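The baseline such models refine is the classical Paris law, da/dN = C (Delta K)^m, which is linear on log-log axes; the paper's exponential equation captures the deviations from that linearity. The sketch below shows only the Paris-law baseline and a life integration, with constants C, m, and the Delta K geometry assumed for illustration rather than fitted to Al 2524-T3.

```python
# Hedged sketch: Paris-law crack growth baseline and a simple fatigue life
# integration. C, m, and the Delta K(a) relation are illustrative assumptions.
import math

def paris_rate(delta_k, c=1e-11, m=3.0):
    """Crack growth rate da/dN = C * (Delta K)^m (m/cycle for Delta K in MPa*sqrt(m))."""
    return c * delta_k ** m

def cycles_to_grow(a0, a_final, delta_k_of_a, da=1e-5):
    """Numerically integrate dN = da / (da/dN) with a fixed crack-length step."""
    n, a = 0.0, a0
    while a < a_final:
        n += da / paris_rate(delta_k_of_a(a))
        a += da
    return n

# Toy geometry: Delta K grows with the square root of crack length,
# anchored (by assumption) at 20 MPa*sqrt(m) for a = 5 mm.
dk = lambda a: 20.0 * math.sqrt(a / 0.005)
print(f"{cycles_to_grow(0.005, 0.02, dk):.3g} cycles")
```

Load ratio effects and near-threshold curvature, which the Paris law ignores, are exactly what the enhanced exponential equation in the abstract is built to handle.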

Relevance:

100.00%

Publisher:

Abstract:

In this work, experimental results are reported for a small-scale cogeneration plant for power and refrigeration purposes. The plant includes a natural gas microturbine and an ammonia/water absorption chiller fired by steam. The system was tested under different turbine loads, steam pressures, and chiller outlet temperatures. An evaluation based on the 1st and 2nd Laws of Thermodynamics was also performed. For an ambient temperature around 24 °C and the microturbine at full load, the plant is able to provide 19 kW of saturated steam at 5.3 bar (161 °C), corresponding to 9.2 kW of refrigeration at -5 °C (COP = 0.44). From a 2nd Law point of view, it was found that there is an optimal chiller outlet temperature that maximizes the chiller's exergetic efficiency. As expected, the microturbine presented the highest irreversibilities, followed by the absorption chiller and the HRSG. In order to reduce the plant's exergy destruction, a new design for the HRSG and new insulation for the exhaust pipe are recommended. © 2013 Elsevier Ltd. All rights reserved.
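The first-law bookkeeping behind a chiller COP is just useful cooling per unit of driving heat. Using the abstract's rounded figures (19 kW of steam, 9.2 kW of refrigeration) purely as an example gives roughly 0.48; the reported COP of 0.44 presumably reflects unrounded measurements or different energy boundaries, so this is only a definitional sketch.

```python
# Hedged sketch of the first-law COP definition for an absorption chiller,
# fed with the abstract's rounded figures as example inputs only.

def absorption_cop(cooling_kw, heat_input_kw):
    """First-law COP: useful refrigeration per unit of driving heat."""
    return cooling_kw / heat_input_kw

print(round(absorption_cop(9.2, 19.0), 2))  # 0.48 with these rounded inputs
```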


Relevance:

100.00%

Publisher:

Abstract:

In humans, theta band (5-7 Hz) power typically increases when performing cognitively demanding working memory (WM) tasks, and simultaneous EEG-fMRI recordings have revealed an inverse relationship between theta power and the BOLD (blood oxygen level dependent) signal in the default mode network during WM. However, synchronization also plays a fundamental role in cognitive processing, and the level of theta and higher frequency band synchronization is modulated during WM. Yet, little is known about the link between BOLD, EEG power, and EEG synchronization during WM, and how these measures develop with human brain maturation or relate to behavioral changes. We examined EEG-BOLD signal correlations from 18 young adults and 15 school-aged children for age-dependent effects during a load-modulated Sternberg WM task. Frontal EEG theta power, both load-dependent and load-independent, was significantly enhanced in children compared to adults, while adults showed stronger fMRI load effects. Children demonstrated a stronger negative correlation between global theta power and the BOLD signal in the default mode network relative to adults. Therefore, we conclude that theta power mediates the suppression of a task-irrelevant network. We further conclude that children suppress this network even more than adults, probably reflecting an increased level of task preparedness that compensates for not yet fully mature cognitive functions, as shown in their lower response accuracy and increased reaction time. In contrast to power, correlations between instantaneous theta global field synchronization and the BOLD signal were exclusively positive in both age groups but only significant in adults, in the frontal-parietal and posterior cingulate cortices. Furthermore, theta synchronization was weaker in children and was, in contrast to EEG power, positively correlated with response accuracy in both age groups. In summary, we conclude that theta EEG-BOLD signal correlations differ between spectral power and synchronization, and that these opposite correlations, with their different distributions, undergo similar and significant neuronal development with brain maturation.

Relevance:

100.00%

Publisher:

Abstract:

Recently, many studies about a network active during rest and deactivated during tasks emerged in the literature: the default mode network (DMN). Spatial and temporal DMN features are important markers for psychiatric diseases. Another prominent indicator of cognitive functioning, yielding information about the mental condition in health and disease, is working memory (WM) processing. In EEG studies, frontal-midline theta power has been shown to increase with load during WM retention in healthy subjects. From these findings, the conclusion can be drawn that an increase in resting state DMN activity may go along with an increase in theta power in high-load WM conditions. We followed this hypothesis in a study on 17 healthy subjects performing a visual Sternberg WM task. The DMN was obtained by a BOLD-ICA approach and its dynamics represented by the percent-strength during pre-stimulus periods. DMN dynamics were temporally correlated with EEG theta spectral power from retention intervals. This so-called covariance mapping yielded the spatial distribution of the theta EEG fluctuations associated with the dynamics of the DMN. In line with previous findings, theta power was increased at frontal-midline electrodes in high- versus low-load conditions during early WM retention. However, load-dependent correlations of DMN with theta power resulted in primarily positive correlations in low-load conditions, while during high-load conditions negative correlations of DMN activity and theta power were observed at frontal-midline electrodes. This DMN-dependent load effect reached significance during later retention. Our results show a complex and load-dependent interaction of pre-stimulus DMN activity and theta power during retention, varying over the course of the retention period. Since both WM performance and DMN activity are markers of mental health, our results could be important for further investigations of psychiatric populations.

Relevance:

100.00%

Publisher:

Abstract:

Developmental assembly of the renal microcirculation is a precise and coordinated process now accessible to experimental scrutiny. Although definition of the cellular and molecular determinants is incomplete, recent findings have reframed concepts and questions about the origins of vascular cells in the glomerulus and the molecules that direct cell recruitment, specialization and morphogenesis. New findings illustrate principles that may be applied to defining critical steps in microvascular repair following glomerular injury. Developmental assembly of endothelial, mesangial and epithelial cells into glomerular capillaries requires that a coordinated, temporally defined series of steps occur in an anatomically ordered sequence. Recent evidence shows that both vasculogenic and angiogenic processes participate. Local signals direct cell migration, proliferation, differentiation, cell-cell recognition, formation of intercellular connections, and morphogenesis. Growth factor receptor tyrosine kinases on vascular cells are important mediators of many of these events. Cultured cell systems have suggested that basic fibroblast growth factor (bFGF), hepatocyte growth factor (HGF), and vascular endothelial growth factor (VEGF) promote endothelial cell proliferation, migration or morphogenesis, while genetic deletion experiments have defined an important role for PDGF beta receptors and platelet-derived growth factor (PDGF) B in glomerular development. Receptor tyrosine kinases that convey non-proliferative signals also contribute in kidney and other sites. The EphB1 receptor, one of a diverse class of Eph receptors implicated in neural cell targeting, directs renal endothelial migration, cell-cell recognition and assembly, and is expressed with its ligand in developing glomeruli. 
Endothelial TIE2 receptors bind angiopoietins (1 and 2), the products of adjacent supportive cells, to direct capillary maturation in a sequence that defines cooperative roles for cells of different lineages. Ultimately, definition of the cellular steps and molecular sequence that direct microvascular cell assembly promises to identify therapeutic targets for repair and adaptive remodeling of injured glomeruli.