897 results for Distributed power generation
Abstract:
Control and optimization of flavor is the ultimate challenge for the food and flavor industry. The major route to flavor formation during thermal processing is the Maillard reaction, a complex cascade of interdependent reactions initiated by the reaction between a reducing sugar and an amino compound. The complexity of the reaction means that researchers turn to kinetic modeling in order to understand the control points of the reaction and to manipulate the flavor profile. Studies of the kinetics of flavor formation have developed over the past 30 years from single-response empirical models of binary aqueous systems to sophisticated multi-response models in food matrices, based on the underlying chemistry, with the power to predict the formation of some key aroma compounds. This paper discusses in detail the development of kinetic models of thermal generation of flavor and looks at the challenges involved in predicting flavor.
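The single-response kinetic models the abstract refers to can be sketched as a chain of first-order steps (reducing sugar and amino compound forming an intermediate, which forms an aroma compound). The step structure and rate constants below are purely illustrative assumptions, not values from the Maillard literature:

```python
# Minimal sketch of a single-response kinetic model for flavour formation.
# Hypothetical scheme: A (sugar-amine adduct) -k1-> I (intermediate) -k2-> P (aroma).
# k1, k2, and the time scale are illustrative, not measured values.

def simulate(k1=0.05, k2=0.02, a0=1.0, dt=0.1, t_end=100.0):
    """Forward-Euler integration of the two-step first-order cascade."""
    a, i, p = a0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        da = -k1 * a * dt              # consumption of precursor
        di = (k1 * a - k2 * i) * dt    # intermediate forms and decays
        dp = k2 * i * dt               # aroma compound accumulates
        a, i, p = a + da, i + di, p + dp
        t += dt
    return a, i, p
```

A multi-response model would fit k1 and k2 (and further branches) simultaneously against measured concentrations of several species rather than a single product.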
Abstract:
Purpose: Rising healthcare costs, fuelled by demand for high-quality, cost-effective care, have driven hospitals to streamline their patient care delivery systems. One such systematic approach is the adoption of Clinical Pathways (CPs) as a tool to increase the quality of healthcare delivery. However, most organizations still rely on paper-based pathway guidelines or specifications, which have limitations in process management and can consequently affect patient safety outcomes. In this paper, we present a method for generating clinical pathways based on organizational semiotics, capturing knowledge from the syntactic, semantic and pragmatic levels through to the social level. Design/methodology/approach: The proposed modeling approach to the generation of CPs adopts organizational semiotics and enables the generation of a semantically rich representation of CP knowledge. The Semantic Analysis Method (SAM) is applied to explicitly represent the semantics of the concepts, their relationships and patterns of behavior in terms of an ontology chart. The Norm Analysis Method (NAM) is adopted to identify and formally specify patterns of behavior and the rules that govern the actions identified on the ontology chart. Information collected during semantic and norm analysis is integrated to guide the generation of CPs using best practice represented in BPMN, thus enabling the automation of CPs. Findings: This research confirms the necessity of taking social aspects into consideration when designing information systems and automating CPs. The complexity of healthcare processes is best tackled by analyzing stakeholders, whom we treat as social agents, together with their goals and patterns of action within the agent network. Originality/value: Current modeling methods describe CPs from a structural aspect comprising activities, properties and interrelationships. However, these methods lack a mechanism to describe possible patterns of human behavior and the conditions under which the behavior will occur. To overcome this weakness, a semiotic approach to the generation of clinical pathways is introduced. The CPs generated from SAM together with norms enrich the knowledge representation of the domain through ontology modeling, which allows the recognition of human responsibilities and obligations and, more importantly, the ultimate power of decision making in exceptional circumstances.
Abstract:
Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historical patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use patterns extracted from customer data for future advertisement campaigns; and finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
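The data-parallel pattern the chapter surveys (partition the data, mine each partition independently, merge the partial results) can be sketched with a frequency-counting task. The chunking scheme and worker count below are illustrative assumptions, not the chapter's framework:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def count_chunk(chunk):
    """Mine one partition: here, a simple local frequency count."""
    return Counter(chunk)

def parallel_count(items, n_workers=4):
    """Split the data, count each chunk in parallel, merge the partials."""
    size = max(1, len(items) // n_workers)
    chunks = [items[i:i + size] for i in range(0, len(items), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        partials = ex.map(count_chunk, chunks)
    total = Counter()
    for partial in partials:
        total.update(partial)   # reduce step: merge per-chunk results
    return total
```

The same map-then-reduce shape generalizes to heavier pattern-mining steps per chunk, and to process- or cluster-level parallelism in Grid and Cloud settings.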
Abstract:
This article reports the results of an experiment that examined how demand aggregators can discipline vertically integrated firms (generator and distributor-retailer holdings) that hold a high share of a wholesale electricity market with a uniform-price double auction (UPDA). We initially develop a treatment where holding members redistribute, in equal proportions (50%-50%), the profit derived from imposing supra-competitive prices. Subsequently, we introduce a vertical disintegration (unbundling) treatment with information sharing among former holding members, where profits are distributed according to market outcomes. Finally, a third treatment introduces two active demand aggregators with flexible interruptible loads in real time. We find that the introduction of responsive demand aggregators neutralizes market power and increases market efficiency, even beyond what is achieved through vertical disintegration.
Abstract:
The Complex Adaptive Systems, Cognitive Agents and Distributed Energy (CASCADE) project is developing a framework based on Agent Based Modelling (ABM). The CASCADE Framework can be used both to gain policy- and industry-relevant insights into the smart grid concept itself and as a platform to design and test distributed ICT solutions for smart grid based business entities. ABM is used to capture the behaviors of different social, economic and technical actors, which may be defined at various levels of abstraction. It is applied to understanding their interactions and can be adapted to include learning processes and emergent patterns. CASCADE models ‘prosumer’ agents (i.e., producers and/or consumers of energy) and ‘aggregator’ agents (e.g., traders of energy in both wholesale and retail markets) at various scales, from large generators and Energy Service Companies down to individual people and devices. The CASCADE Framework is formed of three main subdivisions that link models of electricity supply and demand, the electricity market and power flow. It can also model the variability of renewable energy generation caused by the weather, which is an important issue for grid balancing and the profitability of energy suppliers. The development of CASCADE has already yielded some interesting early findings, demonstrating that it is possible for a mediating agent (aggregator) to achieve stable demand-flattening across groups of domestic households fitted with smart energy control and communication devices, where direct wholesale price signals had previously been found to produce characteristic complex-system instability. In another example, it has demonstrated how large changes in supply mix can be caused even by small changes in demand profile.
Ongoing and planned refinements to the Framework will support investigation of demand response at various scales, the integration of the power sector with the transport and heat sectors, novel technology adoption and diffusion work, the evolution of new smart grid business models, and complex power grid engineering and market interactions.
Abstract:
Distributed generation plays a key role in reducing CO2 emissions and losses in the transmission of power. However, due to the nature of renewable resources, distributed generation requires suitable control strategies to ensure reliability and optimality for the grid. Multi-agent systems are strong candidates for providing distributed control of distributed generation stations as well as reliability and flexibility for grid integration. The proposed multi-agent energy management system consists of agents of a single type, each controlling one or more grid entities, which are represented as generic sub-agent elements. The agent applies one control algorithm across all elements and uses a cost function to evaluate the suitability of each element as a supplier. The behavior set by the agent's user defines which parameters of an element carry greater weight in the cost function, allowing the user to specify supplier preferences dynamically. This study shows the ability of the multi-agent energy management system to select suppliers according to the selection behavior given by the user. The optimality of the supplier for the required demand is ensured by the cost function based on the parameters of the element.
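A weighted cost function of the kind the abstract describes, where user-set behavior weights decide which element parameters dominate supplier selection, can be sketched as follows. The parameter names, weight values, and example elements are hypothetical, not taken from the study:

```python
def supplier_cost(params, weights):
    """Weighted sum of an element's parameters; lower cost = better supplier.
    Parameters absent from the weight map contribute nothing."""
    return sum(weights.get(name, 0.0) * value for name, value in params.items())

def select_supplier(elements, weights):
    """Pick the element whose parameters minimize the user-weighted cost."""
    return min(elements, key=lambda e: supplier_cost(e["params"], weights))

# Illustrative elements: a diesel generator (cheap, high emissions)
# and a solar array (pricier, zero emissions).
elements = [
    {"name": "diesel", "params": {"price": 0.2, "emissions": 0.9}},
    {"name": "solar",  "params": {"price": 0.4, "emissions": 0.0}},
]
```

A behavior that weights emissions heavily (e.g. `{"price": 1.0, "emissions": 5.0}`) selects the solar element; a price-dominated behavior selects the diesel one, which is the dynamic preference mechanism the study attributes to the agent's user.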
Abstract:
Flash floods pose a significant danger to life and property. Unfortunately, in arid and semiarid environments runoff generation shows complex non-linear behavior with strong spatial and temporal non-uniformity. As a result, the predictions made by physically based simulations in semiarid areas are subject to great uncertainty, and failures in the predictive behavior of existing models are common. Thus, better descriptions of physical processes at the watershed scale need to be incorporated into hydrological model structures. For example, terrain relief has been systematically considered static in flood modelling at the watershed scale. Here, we show that the integrated effect of small distributed relief variations originating through concurrent hydrological processes within a storm event was significant on the watershed-scale hydrograph. We model these observations by introducing dynamic formulations of two relief-related parameters at diverse scales: maximum depression storage, and the roughness coefficient in channels. In the final (a posteriori) model structure these parameters are allowed to be either time-constant or time-varying. The case under study is a convective storm in a semiarid Mediterranean watershed with ephemeral channels and high agricultural pressures (the Rambla del Albujón watershed; 556 km²), which showed a complex multi-peak response. First, to obtain quasi-sensible simulations in the (a priori) model with time-constant relief-related parameters, a spatially distributed parameterization was strictly required. Second, a generalized likelihood uncertainty estimation (GLUE) inference applied to the improved model structure, and conditioned on observed nested hydrographs, showed that accounting for dynamic relief-related parameters led to improved simulations.
The discussion is finally broadened by considering the use of the calibrated model both to analyze the sensitivity of the watershed to storm motion and to attempt the flood forecasting of a stratiform event with highly different behavior.
Abstract:
In this paper, we develop an energy-efficient resource-allocation scheme with proportional fairness for downlink multiuser orthogonal frequency-division multiplexing (OFDM) systems with distributed antennas. Our aim is to maximize energy efficiency (EE) under the constraints of the overall transmit power of each remote access unit (RAU), proportional fairness of data rates, and bit error rates (BERs). Because of the nonconvex nature of the optimization problem, obtaining the optimal solution is extremely computationally complex. Therefore, we develop a low-complexity suboptimal algorithm, which separates subcarrier allocation and power allocation. For the low-complexity algorithm, we first allocate subcarriers by assuming equal power distribution. Then, by exploiting the properties of fractional programming, we transform the nonconvex optimization problem in fractional form into an equivalent optimization problem in subtractive form, which admits a tractable solution. Next, an optimal energy-efficient power-allocation algorithm is developed to maximize EE while maintaining proportional fairness. Through computer simulation, we demonstrate the effectiveness of the proposed low-complexity algorithm and illustrate the fundamental trade-off between energy- and spectral-efficient transmission designs.
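The fractional-to-subtractive transformation mentioned above is the core idea of Dinkelbach-style fractional programming: maximizing a ratio f(x)/g(x) reduces to finding the value q at which max_x [f(x) − q·g(x)] equals zero. A minimal sketch over a toy discrete candidate set follows; the rate and power functions are illustrative stand-ins, not the paper's OFDM system model:

```python
import math

def dinkelbach_max_ratio(candidates, f, g, tol=1e-9, max_iter=100):
    """Maximize f(x)/g(x) by iterating on the subtractive form
    F(q) = max_x [f(x) - q*g(x)]; at the optimum, F(q) = 0."""
    q = 0.0
    best = candidates[0]
    for _ in range(max_iter):
        best = max(candidates, key=lambda x: f(x) - q * g(x))
        if abs(f(best) - q * g(best)) < tol:
            break                      # F(q) ~ 0: q is the optimal ratio
        q = f(best) / g(best)          # update the ratio estimate
    return best, q

# Toy "energy efficiency": achievable rate log(1+p) per unit total power p+1
# (the +1 standing in for fixed circuit power).
rate = lambda p: math.log(1.0 + p)
power = lambda p: p + 1.0
```

Here `dinkelbach_max_ratio([1, 2, 3, 4], rate, power)` picks the transmit power whose rate-to-power ratio is largest, mirroring how each subtractive-form subproblem in the paper stays tractable while the outer iteration drives EE to its maximum.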
Abstract:
Wind generation's contribution to supporting peak electricity demand is one of the key questions in wind integration studies. Unlike those of conventional units, the available outputs of different wind farms cannot be approximated as statistically independent, and hence near-zero wind output is possible across an entire power system. This paper reviews the risk model structures currently used to assess wind's capacity value, along with a discussion of the resulting data requirements. A central theme is the benefit of performing statistical estimation of the joint distribution of demand and available wind capacity, focusing attention on uncertainties due to limited histories of wind and demand data; examination of Great Britain data from the last 25 years shows that the data requirements are greater than generally thought. A discussion is therefore presented of how analysis of the types of weather system that have historically driven extreme electricity demands can help to deliver robust insights into wind's contribution to supporting demand, even in the face of such data limitations. The role of the form of the probability distribution for available conventional capacity in driving wind capacity credit results is also discussed.
Abstract:
BACKGROUND: Social networks are common in digital health. A new stream of research is beginning to investigate the mechanisms of digital health social networks (DHSNs): how they are structured, how they function, and how their growth can be nurtured and managed. DHSNs increase in value when additional content is added, and the structure of networks may resemble the characteristics of power laws. Power laws are contrary to traditional Gaussian averages in that they demonstrate correlated phenomena. OBJECTIVES: The objective of this study is to investigate whether the distribution frequency in four DHSNs can be characterized as following a power law. A second objective is to describe the method used to perform the comparison. METHODS: Data from four DHSNs—Alcohol Help Center (AHC), Depression Center (DC), Panic Center (PC), and Stop Smoking Center (SSC)—were compared to power law distributions. To assist future researchers and managers, the 5-step methodology used to analyze and compare datasets is described. RESULTS: All four DHSNs were found to have right-skewed distributions, indicating the data were not normally distributed. When power trend lines were added to each frequency distribution, R² values indicated that, to a very high degree, the variance in post frequencies can be explained by actor rank (AHC .962, DC .975, PC .969, SSC .95). Spearman correlations provided further indication of the strength and statistical significance of the relationship (AHC .987, DC .967, PC .983, SSC .993, P<.001). CONCLUSIONS: This is the first study to investigate power distributions across multiple DHSNs, each addressing a unique condition. Results indicate that despite vast differences in theme, content, and length of existence, DHSNs follow properties of power laws. The structure of DHSNs is important as it gives researchers and managers insight into the nature and mechanisms of network functionality.
The 5-step process undertaken to compare actor contribution patterns can be replicated in networks that are managed by other organizations, and we conjecture that patterns observed in this study could be found in other DHSNs. Future research should analyze network growth over time and examine the characteristics and survival rates of superusers.
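The rank-frequency diagnostic behind results like those above can be approximated by fitting a straight line in log-log space: a dataset that follows a power law yields an R² near 1. This is a generic sketch of that diagnostic, not the study's exact 5-step method:

```python
import math

def loglog_r2(frequencies):
    """R^2 of a linear fit of log(frequency) against log(rank).

    A value near 1 suggests the rank-frequency curve is consistent with a
    power law (a straight line on log-log axes). Frequencies must be > 0."""
    ranked = sorted(frequencies, reverse=True)
    pts = [(math.log(rank), math.log(freq))
           for rank, freq in enumerate(ranked, start=1)]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    syy = sum((y - my) ** 2 for _, y in pts)
    return (sxy * sxy) / (sxx * syy)   # squared Pearson r in log-log space
```

An exact power law such as frequency = 100/rank gives R² ≈ 1, while an exponential decay (a curve on log-log axes) scores visibly lower, which is the kind of separation the study's trend-line R² values exploit.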
Abstract:
Thermal generation is a vital component of mature and reliable electricity markets. As the share of renewable electricity in such markets grows, so too do the challenges associated with its variability. Proposed solutions to these challenges typically focus on alternatives to primary generation, such as energy storage, demand side management, or increased interconnection. Less attention is given to the demands placed on conventional thermal generation or its potential for increased flexibility. However, for the foreseeable future, conventional plants will have to operate alongside new renewables and have an essential role in accommodating increasing supply-side variability. This paper explores the role that conventional generation has to play in managing variability through the sub-system case study of Northern Ireland, identifying the significance of specific plant characteristics for reliable system operation. Particular attention is given to the challenges of wind ramping and the need to avoid excessive wind curtailment. Potential for conflict is identified with the role for conventional plant in addressing these two challenges. Market specific strategies for using the existing fleet of generation to reduce the impact of renewable resource variability are proposed, and wider lessons from the approach taken are identified.
Abstract:
The different parameters used for the photoactivation process produce changes in the degree of conversion (DC%) and temperature rise (TR) of composite resins. Thus, the purpose of this study was to evaluate the DC (%) and TR of a microhybrid composite resin photoactivated by a new-generation LED. For the KBr pellet technique, the composite resin was placed into a metallic mould (1-mm thickness and 4-mm diameter) and photoactivated as follows: continuous LED LCU with different power density values (50-1000 mW/cm(2)). The measurements for the DC (%) were made in a FTIR Spectrometer Bomen (model MB-102, Quebec, Canada). The spectroscopy (FTIR) spectra for both uncured and cured samples were analyzed using an accessory for diffuse reflectance. The measurements were recorded in absorbance mode under the following conditions: 32 scans, 4-cm(-1) resolution, and a 300 to 4000-cm(-1) range. The percentage of unreacted carbon-carbon double bonds (% C=C) was determined from the ratio of the absorbance intensities of aliphatic C=C (peak at 1638 cm(-1)) against an internal standard before and after the curing of the specimen: aromatic C-C (peak at 1608 cm(-1)). For the TR, the samples were made in a metallic mould (2-mm thickness and 4-mm diameter) and photoactivated for 5, 10, and 20 s. A thermocouple was attached to a multimeter to allow the temperature readings. The DC (%) and TR were calculated by the standard technique and submitted to ANOVA and Tukey's test (p < 0.05). The degree of conversion values varied from 35.0 (+/- 1.3) to 45.0 (+/- 2.4) for 5 s, 45.0 (+/- 1.3) to 55.0 (+/- 2.4) for 10 s, and 47.0 (+/- 1.3) to 52.0 (+/- 2.4) for 20 s. For the TR, the values ranged from 0.3 (+/- 0.01) to 5.4 (+/- 0.11) degrees C for 5 s, from 0.5 (+/- 0.02) to 9.3 (+/- 0.28) degrees C for 10 s, and from 1.0 (+/- 0.06) to 15.0 (+/- 0.95) degrees C for 20 s.
The power densities and irradiation times showed a significant effect on the degree of conversion and temperature rise.
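The conversion calculation described in the abstract, comparing the aliphatic C=C (1638 cm(-1)) absorbance normalized by the aromatic C-C internal standard (1608 cm(-1)) before and after curing, can be sketched as follows. The absorbance values in the example are hypothetical, not the study's measurements:

```python
def degree_of_conversion(aliph_cured, arom_cured, aliph_uncured, arom_uncured):
    """DC (%) from FTIR absorbances: one minus the cured/uncured ratio of
    aliphatic C=C (1638 cm^-1) normalized by the aromatic C-C internal
    standard (1608 cm^-1), expressed as a percentage."""
    cured_ratio = aliph_cured / arom_cured
    uncured_ratio = aliph_uncured / arom_uncured
    return (1.0 - cured_ratio / uncured_ratio) * 100.0
```

For instance, if curing reduces the normalized aliphatic C=C peak from 1.00 to 0.55, the computed DC is 45%, within the range the study reports for 5-10 s exposures.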